The nonuniformity of antibody distribution in the kidney and its influence on dosimetry.
Flynn, Aiden A; Pedley, R Barbara; Green, Alan J; Dearling, Jason L; El-Emir, Ethaar; Boxer, Geoffrey M; Boden, Robert; Begent, Richard H J
2003-02-01
The therapeutic efficacy of radiolabeled antibody fragments can be limited by nephrotoxicity, particularly when the kidney is the major route of extraction from the circulation. Conventional dose estimates in kidney assume uniform dose deposition, but we have shown increased antibody localization in the cortex after glomerular filtration. The purpose of this study was to measure the radioactivity in cortex relative to medulla for a range of antibodies and to assess the validity of the assumption of uniformity of dose deposition in the whole kidney and in the cortex for these antibodies with a range of radionuclides. Storage phosphor plate technology (radioluminography) was used to acquire images of the distributions of a range of antibodies of various sizes, labeled with 125I, in kidney sections. This allowed the calculation of the antibody concentration in the cortex relative to the medulla. Beta-particle point dose kernels were then used to generate the dose-rate distributions from 14C, 131I, 186Re, 32P and 90Y. The correlation between the actual dose-rate distribution and the corresponding distribution calculated assuming uniform antibody distribution throughout the kidney was used to test the validity of estimating dose by assuming uniformity in the kidney and in the cortex. There was a strong inverse relationship between the ratio of the radioactivity in the cortex relative to that in the medulla and the antibody size. The nonuniformity of dose deposition was greatest with the smallest antibody fragments but became more uniform as the range of the emissions from the radionuclide increased. Furthermore, there was a strong correlation between the actual dose-rate distribution and the distribution when assuming a uniform source in the kidney for intact antibodies along with medium- to long-range radionuclides, but there was no correlation for small antibody fragments with any radioisotope or for short-range radionuclides with any antibody. 
However, when the cortex was separated from the whole kidney, the correlation between the actual dose-rate distribution and the assumed dose-rate distribution, if the source was uniform, increased significantly. During radioimmunotherapy, the extent of nonuniformity of dose deposition in the kidney depends on the properties of the antibody and radionuclide. For dosimetry estimates, the cortex should be taken as a separate source region when the radiopharmaceutical is small enough to be filtered by the glomerulus.
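The convolution step described above (an activity distribution convolved with a beta-particle point dose kernel) can be sketched in one dimension. The profile, the kernel shapes, and the 4:1 cortex-to-medulla uptake below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def dose_rate(activity, kernel):
    """Convolve an activity profile with a symmetric dose point kernel."""
    return np.convolve(activity, kernel, mode="same")

x = np.arange(100)                        # arbitrary position grid (a.u.)
activity = np.where(x < 50, 4.0, 1.0)     # assumed 4:1 cortex-to-medulla uptake

# Exponentially decaying stand-ins for short- and long-range beta emitters
short_kernel = np.exp(-np.abs(np.arange(-20, 21)) / 1.0)
long_kernel = np.exp(-np.abs(np.arange(-20, 21)) / 10.0)
short_kernel /= short_kernel.sum()
long_kernel /= long_kernel.sum()

d_short = dose_rate(activity, short_kernel)
d_long = dose_rate(activity, long_kernel)
# The longer-range kernel smooths the activity step more, i.e. the dose-rate
# distribution becomes more uniform as the emission range increases.
```

The interior of `d_long` varies less than that of `d_short`, mirroring the abstract's finding that nonuniform uptake matters most for short-range emissions.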
Trapping of Neutrinos in Extremely Compact Stars and the Influence of Brane Tension on This Process
NASA Astrophysics Data System (ADS)
Stuchlík, Zdeněk; Hladík, Jan; Urbanec, Martin
We present estimates on the efficiency of neutrino trapping in braneworld extremely compact stars, using the simplest model with a uniform distribution of energy density, assuming massless neutrinos and a uniform distribution of neutrino emissivity. Computations have been done for two different uniform-density stellar solutions in the Randall-Sundrum II type braneworld: one with a Reissner-Nordström-type geometry, and a second derived by Germani and Maartens.

Continuous-variable quantum key distribution in uniform fast-fading channels
NASA Astrophysics Data System (ADS)
Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano
2018-03-01
We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
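In one dimension the construction above reduces to the familiar inverse-CDF transform x = F^{-1}(U). A minimal sketch for an exponential target distribution (the choice of target is ours, for illustration):

```python
import math
import random

def sample_exponential(rate, u):
    """Inverse-CDF transform: F(x) = 1 - exp(-rate*x)  =>  x = -ln(1-u)/rate."""
    return -math.log(1.0 - u) / rate

random.seed(0)
samples = [sample_exponential(1.0, random.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# The mean of Exp(rate=1) is 1, so the sample mean should be close to 1.
```

For n dimensions, the recursion draws x_1 from the marginal F_1 and then each x_k from the conditional distribution of x_k given x_1, ..., x_{k-1}, using one fresh uniform variate per coordinate.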
A Non-Parametric Probability Density Estimator and Some Applications.
1984-05-01
distributions, which are assumed to be representative of platykurtic, mesokurtic, and leptokurtic distributions in general. The dissertation is... platykurtic distributions. Consider, for example, the uniform distribution shown in Figure 4 (Sensitivity to Support Estimation). The...results of the density function comparisons indicate that the new estimator is clearly superior for platykurtic distributions, equal to the best
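As a concrete anchor for the platykurtic example, the kurtosis of the uniform distribution follows directly from its central moments (standard textbook values, not taken from the report):

```python
# Kurtosis of the uniform distribution on (0, 1) from its central moments:
# variance mu2 = 1/12, fourth central moment mu4 = 1/80.
mu2 = 1 / 12
mu4 = 1 / 80
kurtosis = mu4 / mu2**2  # = 1.8, below the Gaussian value of 3 (platykurtic)
```

A kurtosis below 3 is what makes the uniform distribution the canonical platykurtic test case.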
ERIC Educational Resources Information Center
Baker, Bruce D.; Ramsey, Matthew J.
2010-01-01
Over the past few decades, a handful of states have chosen to provide state financing of special education programs through a method referred to as "Census-Based" funding--an approach that involves allocating block-grant funding on the assumed basis of a uniform distribution of children with disabilities across school districts. The…
Evaluation of Improved Engine Compartment Overheat Detection Techniques.
1986-08-01
radiation properties (emissivity and reflectivity) of the surface. The first task of the numerical procedure is to investigate the radiosity (radiative heat...and radiosity are spatially uniform within each zone. - Radiative properties are spatially uniform and independent of direction. - The enclosure is...variation in the radiosity will be nonuniform in distribution in that region. The zone analysis method assumes the temperature and radiation
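The zone analysis described here leads to a linear system for the zone radiosities. A minimal sketch for two large parallel gray plates; the geometry, emissivities, and temperatures are assumed for illustration and are not from the report:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Two large parallel gray plates (view factors F12 = F21 = 1), with assumed
# emissivities and temperatures; properties taken uniform within each zone.
eps = np.array([0.5, 0.5])
T = np.array([1000.0, 500.0])
F = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Zone radiosity balance: J_i = eps_i*sigma*T_i^4 + (1 - eps_i)*sum_j F_ij*J_j
A = np.eye(2) - (1 - eps)[:, None] * F
b = eps * SIGMA * T**4
J = np.linalg.solve(A, b)

# Net heat flux leaving each surface: q_i = eps_i/(1-eps_i)*(sigma*T_i^4 - J_i)
q = eps / (1 - eps) * (SIGMA * T**4 - J)
```

For this two-plate case the solve reproduces the closed-form net flux sigma*(T1^4 - T2^4)/(1/eps1 + 1/eps2 - 1), a standard check on the zone formulation.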
Development of a computational technique to measure cartilage contact area.
Willing, Ryan; Lapner, Michael; Lalone, Emily A; King, Graham J W; Johnson, James A
2014-03-21
Computational measurement of joint contact distributions offers the benefit of non-invasive measurement of joint contact without the use of interpositional sensors or casting materials. This paper describes a technique for indirectly measuring joint contact based on overlapping of articular cartilage computer models derived from CT images and positioned using in vitro motion capture data. The accuracy of this technique when using the physiological nonuniform cartilage thickness distribution, or simplified uniform cartilage thickness distributions, is quantified through comparison with direct measurements of contact area made using a casting technique. The efficacy of using indirect contact measurement techniques for measuring the changes in contact area resulting from hemiarthroplasty at the elbow is also quantified. Using the physiological nonuniform cartilage thickness distribution reliably measured contact area (ICC=0.727), but no better than assuming bone-specific uniform cartilage thicknesses (ICC=0.673). When a contact pattern agreement score (s(agree)) was used to assess the accuracy of cartilage contact measurements made using physiological nonuniform or simplified uniform cartilage thickness distributions in terms of size, shape, and location, their accuracies were not significantly different (p>0.05). The results of this study demonstrate that cartilage contact can be measured indirectly based on the overlapping of cartilage contact models. However, the results also suggest that in some situations, inter-bone distance measurement and an assumed cartilage thickness may suffice for predicting joint contact patterns.
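In its simplest form, the overlap-based contact measurement reduces to thresholding the inter-bone distance against the combined cartilage thickness. A sketch with an assumed uniform thickness and a hypothetical distance map (none of these numbers are from the study):

```python
import numpy as np

def contact_area(inter_bone_dist, t1, t2, pixel_area):
    """Contact wherever the bone-to-bone distance is less than the combined
    (here assumed uniform) cartilage thicknesses, i.e. the models overlap."""
    mask = inter_bone_dist < (t1 + t2)
    return mask.sum() * pixel_area, mask

# Hypothetical 5x5 grid of inter-bone distances (mm), 0.5 mm uniform
# cartilage on each surface, 0.2 mm grid spacing -> 0.04 mm^2 per pixel.
dist = np.array([[1.2, 1.1, 0.9, 1.1, 1.3],
                 [1.0, 0.8, 0.7, 0.8, 1.0],
                 [0.9, 0.7, 0.6, 0.7, 0.9],
                 [1.0, 0.8, 0.7, 0.8, 1.0],
                 [1.2, 1.1, 0.9, 1.1, 1.3]])
area, mask = contact_area(dist, 0.5, 0.5, 0.04)
```

The physiological variant replaces the scalar thicknesses with per-point thickness maps; the thresholding logic is unchanged.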
Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1991-01-01
The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution. Numerical three-body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
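The factor-of-3 enhancement can be illustrated numerically: for a Maxwellian speed distribution the gravitational-focusing term in the two-body accretion rate averages to <1/v^2> = 1/sigma^2, three times the single-velocity value 1/v_rms^2 = 1/(3*sigma^2). A quadrature sketch (our own consistency check, not the paper's calculation):

```python
import numpy as np

# Maxwellian speed distribution f(v) ~ v^2 * exp(-v^2 / (2 s^2)).
# Compare <1/v^2> with the single-velocity approximation 1/v_rms^2.
s = 1.0
v = np.linspace(1e-6, 10 * s, 200_000)
w = v**2 * np.exp(-v**2 / (2 * s**2))  # unnormalized Maxwellian weight
dv = v[1] - v[0]

mean_inv_v2 = np.sum(w / v**2 * dv) / np.sum(w * dv)   # = 1/s^2
v_rms2 = np.sum(w * v**2 * dv) / np.sum(w * dv)        # = 3 s^2

enhancement = mean_inv_v2 / (1.0 / v_rms2)
# -> approximately 3, echoing the enhancement factor quoted in the abstract
```

Because 1/v^2 is convex, slow planetesimals contribute disproportionately to gravitational focusing, which is why averaging over the full distribution beats evaluating at the RMS speed.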
Limitations on the applicability of FODO lattices for electron cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertsche, K.J.
1997-09-01
Assuming a KV beam distribution (a uniform distribution over an elliptical region of transverse phase space), the beam envelope equations are shown, where X and Y are the transverse beam sizes, κ is the lens strength, K is the generalized beam perveance, and ε is the beam emittance. If we further assume operation in a space-charge dominated regime, the rightmost term can be ignored in each equation. In this case, particle flow will be laminar, and the above equations not only describe the envelope of the beam but also the trajectory of the outermost particles.
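The envelope equations referred to above have the standard Kapchinskij-Vladimirskij form; the sign conventions shown here are the conventional ones and are assumed rather than quoted from the report:

```latex
X'' + \kappa_x X - \frac{2K}{X+Y} - \frac{\epsilon_x^2}{X^3} = 0, \qquad
Y'' + \kappa_y Y - \frac{2K}{X+Y} - \frac{\epsilon_y^2}{Y^3} = 0
```

In the space-charge dominated regime the rightmost (emittance) term in each equation is the one dropped, leaving the focusing and space-charge terms to govern the envelope.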
Comparison of Uniform Flux and Uniform Head Wellbore Boundary for the Multilevel Slug Test
NASA Astrophysics Data System (ADS)
Chen, C.
2012-12-01
The multilevel slug test (MLST) is useful in characterizing the vertical distribution of hydraulic conductivity K(z) around a well. Most MLST models assume a uniform flux (UF) distribution along the screen length ls during the test. This assumption leads to a nonuniform head distribution along ls, which is questionable under field conditions. Alternatively, the head distribution along ls can be assumed uniform (UH). The MLST model associated with the UH assumption is mathematically more complicated and thus less used. The difference between using UF and UH in modeling the MLST is investigated here for confined aquifers. For the low-K conditions of monotonic recovery of the well water level, it is found that the well water level recovery predicted by the UH model is faster than that predicted by the UF model, and the discrepancy is more pronounced for a larger aspect ratio rw/ls, with rw being the well radius, a smaller partial penetration ratio ls/b, with b being the aquifer thickness, and/or a smaller anisotropy ratio Kz/Kr. For the high-K condition, where the well water level recovery is oscillatory about its initial position, it is found that the amplitude of the oscillatory recovery predicted by the UH model is larger than that by the UF model, and the discrepancy gets larger for a larger aspect ratio, a smaller partial penetration ratio, or a smaller anisotropy ratio. For the fully penetrating condition, both the UH and UF models give the same results, regardless of low- or high-K conditions. For the same set of data, the K value estimated by the UH model will be greater than that by the UF model.
The Unevenly Distributed Nearest Brown Dwarfs
NASA Astrophysics Data System (ADS)
Bihain, Gabriel; Scholz, Ralf-Dieter
2016-08-01
To address the questions of how many brown dwarfs there are in the Milky Way, how these objects relate to star formation, and whether the brown dwarf formation rate was different in the past, the star-to-brown dwarf number ratio can be considered. While main sequence stars are well known components of the solar neighborhood, lower mass, substellar objects increasingly add to the census of the nearest objects. The sky projection of the known objects at <6.5 pc shows that stars present a uniform distribution and brown dwarfs a non-uniform distribution, with about four times more brown dwarfs behind than ahead of the Sun relative to the direction of rotation of the Galaxy. Assuming that substellar objects are distributed uniformly, the observed configuration has a probability of 0.1%. The helio- and geocentricity of the configuration suggests that it probably results from an observational bias, which, if compensated for by future discoveries, would bring the star-to-brown dwarf ratio into agreement with the average ratio found in star forming regions.
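The quoted 0.1% is a tail probability of a binomial split under the uniform hypothesis. A generic sketch of that calculation; the counts below are hypothetical, since the abstract does not give the actual numbers of objects:

```python
from math import comb

def two_sided_split_prob(n, k):
    """P(max(X, n-X) >= k) for X ~ Binomial(n, 1/2): how unlikely is a split
    at least as lopsided as k vs n-k if objects are distributed uniformly?"""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    return min(1.0, 2 * tail)  # count both directions of asymmetry

# Hypothetical example (not the paper's counts): 20 objects,
# 16 behind vs 4 ahead of the Sun, i.e. a 4:1 split.
p = two_sided_split_prob(20, 16)
```

Larger samples with the same 4:1 asymmetry drive the probability down rapidly, which is the sense in which the observed configuration is unlikely under uniformity.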
Calibration of pavement response models for the mechanistic-empirical pavement design method
DOT National Transportation Integrated Search
2007-09-01
Most pavement design methodologies assume that the tire-pavement contact stress is equal to the tire inflation pressure and uniformly distributed over a circular contact area. However, tire-pavement contact area is not in a circular shape and the con...
Efficiency degradation due to tracking errors for point focusing solar collectors
NASA Technical Reports Server (NTRS)
Hughes, R. O.
1978-01-01
An important parameter in the design of point focusing solar collectors is the intercept factor, which is a measure of efficiency and of the energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value calculation.
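The expectation described above can be approximated by Monte Carlo: average the intercept factor of an assumed Gaussian flux spot over tracking errors drawn uniformly within the threshold limits. All parameter values below are illustrative assumptions:

```python
import random

random.seed(1)

def intercept_factor(offset, aperture_radius, sigma=1.0, n=10_000):
    """Fraction of a radially symmetric Gaussian flux spot falling inside a
    circular aperture displaced by `offset` (Monte Carlo estimate)."""
    hits = 0
    for _ in range(n):
        x = random.gauss(0.0, sigma)
        y = random.gauss(0.0, sigma)
        if (x - offset) ** 2 + y ** 2 <= aperture_radius ** 2:
            hits += 1
    return hits / n

# Expected intercept factor with tracking error uniform within +/- threshold:
threshold = 1.5  # assumed threshold limit (same units as sigma)
errors = [random.uniform(-threshold, threshold) for _ in range(100)]
expected_phi = sum(intercept_factor(e, 2.0) for e in errors) / len(errors)
phi_ideal = intercept_factor(0.0, 2.0)
# Tracking errors can only degrade the intercept factor below its ideal value.
```

The derivation in the paper replaces this sampling with closed-form statistical expressions, but the averaging structure is the same.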
Simulation of electromagnetic ion cyclotron triggered emissions in the Earth's inner magnetosphere
NASA Astrophysics Data System (ADS)
Shoji, Masafumi; Omura, Yoshiharu
2011-05-01
In a recent observation by the Cluster spacecraft, emissions triggered by electromagnetic ion cyclotron (EMIC) waves were discovered in the inner magnetosphere. We perform hybrid simulations to reproduce the EMIC triggered emissions. We develop a self-consistent one-dimensional hybrid code with a cylindrical geometry of the background magnetic field. We assume a parabolic magnetic field to model the dipole magnetic field in the equatorial region of the inner magnetosphere. Triggering EMIC waves are driven by a left-handed polarized external current assumed at the magnetic equator in the simulation model. Cold proton, helium, and oxygen ions, which form branches of the dispersion relation of the EMIC waves, are uniformly distributed in the simulation space. Energetic protons with a loss cone distribution function are also assumed as resonant particles. We reproduce rising tone emissions in the simulation space, finding a good agreement with the nonlinear wave growth theory. In the energetic proton velocity distribution we find formation of a proton hole, which is assumed in the nonlinear wave growth theory. A substantial amount of the energetic protons are scattered into the loss cone, while some of the resonant protons are accelerated to higher pitch angles, forming a pancake velocity distribution.
Modeling of two-dimensional overland flow in a vegetative filter
Matthew J. Helmers; Dean E. Eisenhauer; Thomas G. Franti; Michael G. Dosskey
2002-01-01
Water transports sediment and other pollutants through vegetative filters. It is often assumed that the overland flow is uniformly distributed across the vegetative filter, but this research indicates otherwise. The objective of this study was to model the two-dimensional overland water flow through a vegetative filter, accounting for variation in microtopography,...
Haak, Danielle M.; Chaine, Noelle M.; Stephen, Bruce J.; Wong, Alec; Allen, Craig R.
2013-01-01
The Chinese mystery snail (Bellamya chinensis) is an aquatic invasive species found throughout the USA. Little is known about this species' life history or ecology, and only one population estimate has been published, for Wild Plum Lake in southeast Nebraska. A recent die-off event occurred at this same reservoir, and we present a mortality estimate for this B. chinensis population using a quadrat approach. Assuming uniform distribution throughout the newly-exposed lake bed (20,900 m2), we estimate 42,845 individuals died during this event, amounting to approximately 17% of the previously-estimated population size of 253,570. Assuming uniform distribution throughout all previously-reported available habitat (48,525 m2), we estimate 99,476 individuals died, comprising 39% of the previously-reported adult population. The die-off occurred during an extreme drought event, which was coincident with abnormally hot weather. However, the exact reason for the die-off is still unclear. More monitoring of the population dynamics of B. chinensis is necessary to further our understanding of this species' ecology.
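The extrapolation behind these estimates is a simple density scaling, which reproduces the figures quoted above:

```python
# Quadrat-style scaling of the die-off estimates reported in the abstract:
# a density inferred from the newly exposed lake bed, extrapolated to all
# previously reported habitat under the uniform-distribution assumption.
dead_on_exposed_bed = 42_845
exposed_bed_area = 20_900     # m^2
all_habitat_area = 48_525     # m^2
prior_population = 253_570

density = dead_on_exposed_bed / exposed_bed_area        # 2.05 snails per m^2
dead_all_habitat = density * all_habitat_area           # ~99,476 individuals
fraction_of_population = dead_on_exposed_bed / prior_population  # ~17%
```

Both reported mortality figures follow from the same inferred density of about 2.05 individuals per square meter; only the assumed occupied area differs.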
Synoptic, Global MHD Model for the Solar Corona
NASA Astrophysics Data System (ADS)
Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.
2007-05-01
The common techniques for mimicking solar corona heating and solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for the Alfvén wave turbulence; 2) some empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to obtain a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can obtain the surface distribution of γ as a function of the final speed originating from each point. By interpolating γ to a spherically uniform value on the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady state MHD solution for the solar corona. We present the model results for different Carrington Rotations.
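The energy-tracing step relies on the Bernoulli integral, which in its standard polytropic form (our notation, assumed to match the paper's usage) reads:

```latex
\frac{u^{2}}{2} + \frac{\gamma}{\gamma - 1}\,\frac{p}{\rho} - \frac{G M_\odot}{r} = \mathrm{const}
```

Far from the Sun the kinetic term u^2/2 dominates and is fixed by the WSA asymptotic speed; at the surface u is negligible and the gravitational term is known, so the enthalpy term, and hence the surface value of γ, follows from the asymptotic wind speed along each field line.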
Local Burn-Up Effects in the NBSR Fuel Element
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown N. R.; Hanson A.; Diamond, D.
2013-01-31
This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when incorporating local burn-up effects and has lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element. In the uniform burn-up case, the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel element average. A uniform burn-up distribution throughout a half-element also causes a bias in fuel element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.
The dynamics of spin stabilized spacecraft with movable appendages, part 1
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Sellappan, R.
1975-01-01
The motion and stability of spin stabilized spacecraft with movable external appendages are treated both analytically and numerically. The two basic types of appendages considered are: (1) a telescoping type of varying length and (2) a hinged type of fixed length whose orientation with respect to the main part of the spacecraft can vary. Two classes of telescoping appendages are considered: (a) where an end mass is mounted at the end of an (assumed) massless boom; and (b) where the appendage is assumed to consist of a uniformly distributed homogeneous mass throughout its length. For the telescoping system Eulerian equations of motion are developed. During all deployment sequences it is assumed that the transverse component of angular momentum is much smaller than the component along the major spin axis. Closed form analytical solutions for the time response of the transverse components of angular velocities are obtained when the spacecraft hub has a nearly spherical mass distribution.
Conductorlike behavior of a photoemitting dielectric surface
NASA Technical Reports Server (NTRS)
De, B. R.
1979-01-01
It has been suggested in the past that a uniformly illuminated photoemitting dielectric surface of finite extent acquires in the steady state a surface charge distribution as if the surface were conducting (i.e., the surface becomes equipotential). In this paper an analytical proof of this conductorlike behavior is given. The only restrictions are that the photoelectron emission from the surface has azimuthal symmetry and that the photosheath may be assumed to be collisionless. It is tacitly assumed that a steady state is attainable, which means that the photoelectron spectrum has a high-energy cutoff.
Application of genetic algorithms in nonlinear heat conduction problems.
Kadri, Muhammad Bilal; Khan, Waqar A
2014-01-01
Genetic algorithms are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis, and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature, while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that the GA gives the minimum dimensionless temperature in each selected geometry.
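A minimal real-coded genetic algorithm of the kind used in such optimizations can be sketched as follows; the operators, parameter values, and toy objective are generic choices, not the configuration reported in the paper:

```python
import random

random.seed(42)

def genetic_minimize(f, lo, hi, pop_size=40, generations=80,
                     mutation_sigma=0.2, elite=2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with elitism. Illustrative only."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        new_pop = pop[:elite]                      # keep the best unchanged
        while len(new_pop) < pop_size:
            a = min(random.sample(pop, 3), key=f)  # tournament selection
            b = min(random.sample(pop, 3), key=f)
            w = random.random()
            child = w * a + (1 - w) * b            # blend crossover
            child += random.gauss(0, mutation_sigma)
            new_pop.append(min(max(child, lo), hi))
        pop = new_pop
    return min(pop, key=f)

# Toy objective standing in for a dimensionless-temperature expression,
# minimized at x = 2:
best = genetic_minimize(lambda x: (x - 2.0) ** 2 + 1.0, -10.0, 10.0)
```

Elitism guarantees the best candidate found so far is never lost, which makes the search monotonically non-worsening across generations.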
A frequency quantum interpretation of the surface renewal model of mass transfer
Mondal, Chanchal
2017-01-01
The surface of a turbulent liquid is visualized as consisting of a large number of chaotic eddies or liquid elements. Assuming that surface elements of a particular age have renewal frequencies that are integral multiples of a fundamental frequency quantum, and further assuming that the renewal frequency distribution is of the Boltzmann type, performing a population balance for these elements leads to the Danckwerts surface age distribution. The basic quantum is what has been traditionally called the rate of surface renewal. The Higbie surface age distribution follows if the renewal frequency distribution of such elements is assumed to be continuous. Four age distributions, which reflect different start-up conditions of the absorption process, are then used to analyse transient physical gas absorption into a large volume of liquid, assuming negligible gas-side mass-transfer resistance. The first two are different versions of the Danckwerts model, the third one is based on the uniform and Higbie distributions, while the fourth one is a mixed distribution. For the four cases, theoretical expressions are derived for the rates of gas absorption and dissolved-gas transfer to the bulk liquid. Under transient conditions, these two rates are not equal and have an inverse relationship. However, with the progress of absorption towards steady state, they approach one another. Assuming steady-state conditions, the conventional one-parameter Danckwerts age distribution is generalized to a two-parameter age distribution. Like the two-parameter logarithmic normal distribution, this distribution can also capture the bell-shaped nature of the distribution of the ages of surface elements observed experimentally in air–sea gas and heat exchange. Estimates of the liquid-side mass-transfer coefficient made using these two distributions for the absorption of hydrogen and oxygen in water are very close to one another and are comparable to experimental values reported in the literature. 
PMID:28791137
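For reference, the two classical surface age distributions discussed above have the standard forms (our notation, standard in the mass-transfer literature):

```latex
\phi(t) = s\,e^{-st} \quad \text{(Danckwerts)}, \qquad
\phi(t) = \frac{1}{t_c}, \ \ 0 \le t \le t_c \quad \text{(Higbie)},
```

with the corresponding liquid-side mass-transfer coefficients k_L = sqrt(D s) for the Danckwerts distribution and k_L = 2 sqrt(D / (pi t_c)) for the Higbie distribution, where D is the diffusivity, s the rate of surface renewal, and t_c the contact time.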
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
1996-01-01
Theoretical analysis and numerical computations are performed to set forth a new model of film condensation on a horizontal cylinder. The model is more general than the well-known Nusselt model of film condensation and is designed to encompass all essential features of the Nusselt model. It is shown that a single parameter, constructed explicitly and without specification of the cylinder wall temperature, determines the degree of departure from the Nusselt model, which assumes a known and uniform wall temperature. It is also shown that the Nusselt model is reached for very small, as well as very large, values of this parameter. In both limiting cases the cylinder wall temperature assumes a uniform distribution and the Nusselt model is approached. The maximum deviations between the two models are rather small for cases which are representative of cylinder dimensions, materials, and conditions encountered in practice.
Liao, Guan-Bo; Chen, Yin-Quan; Bareil, Paul B; Sheng, Yunlong; Chiou, Arthur; Chang, Ming-Shien
2014-10-01
We calculated the three-dimensional optical stress distribution and the resulting deformation of a biconcave human red blood cell (RBC) in a pair of parallel optical traps. We assumed a Gaussian intensity distribution with a spherical wavefront for each trapping beam and calculated the optical stress from the momentum transfer associated with the reflection and refraction of the incident photons at each interface. The RBC was modelled as a biconcave thin elastic membrane with uniform elasticity and a uniform thickness of 0.25 μm. The resulting cell deformation was determined from the optical stress distribution by finite element software, the Comsol Structure Mechanics Module, with Young's modulus (E) as a fitting parameter in order to fit the theoretical results for cell elongation to our experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu; Donnelly, Eric D.; Strauss, Jonathan B.
Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either a varied or a constant cancer cell distribution using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into the MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which varied with depth and resembled a half-Gaussian distribution, was used in the radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice, assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and the prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, type of cancer cell distribution, and the sensitivity of normal tissue.
With a half-Gaussian distribution of cancer cells, which peaked at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs were in the range from (5.0)^4 to (13.4)^4 for radiosensitive normal tissue, depending on the cylinder size, treatment length, prescription depth, and dose as well. However, for a uniform cancer cell distribution, the EUDs were between 6.3 Gy × 4 and 7.1 Gy × 4, and the TRs were found to be between (1.4)^4 and (1.7)^4. For uniformly interspersed cancer and radio-resistant normal cells, the TRs were less than 1. The two VCBT prescription regimens were found to be equivalent in terms of EUDs and TRs. Conclusions: HDR VCBT strongly favors a cylindrical target volume with the cancer cell distribution following its dosimetric trend. Assuming a half-Gaussian distribution of cancer cells, HDR VCBT provides a considerable radiobiological advantage over external beam radiotherapy (EBRT) in terms of sparing more normal tissue while maintaining the same level of cancer cell killing. But for the uniform cancer cell distribution and radio-resistant normal tissue, the radiobiology outcome of HDR VCBT does not show an advantage over EBRT. This study strongly suggests that radiation therapy design should consider the cancer cell distribution inside the target volume in addition to the shape of the target.
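The EUD computation under the LQ model can be sketched as follows; the alpha and beta values and the example dose and weight arrays are illustrative assumptions, not the study's parameters:

```python
import math

def lq_eud(doses, weights, alpha=0.3, beta=0.03):
    """Equivalent uniform dose under the linear-quadratic model: the uniform
    dose giving the same weighted cell survival as the nonuniform plan.
    alpha (1/Gy) and beta (1/Gy^2) are assumed, illustrative values."""
    total_w = sum(weights)
    survival = sum(w * math.exp(-alpha * d - beta * d * d)
                   for d, w in zip(doses, weights)) / total_w
    # Solve alpha*EUD + beta*EUD^2 = -ln(S) for the positive root.
    c = -math.log(survival)
    return (-alpha + math.sqrt(alpha * alpha + 4 * beta * c)) / (2 * beta)

# A uniform dose returns itself; a nonuniform plan weighted toward the mucosa
# (half-Gaussian-like weights) yields an EUD between the dose extremes.
eud_uniform = lq_eud([5.5, 5.5, 5.5, 5.5], [1.0, 1.0, 1.0, 1.0])
eud_mixed = lq_eud([8.8, 5.5, 3.0], [4.0, 2.0, 1.0])
```

Because survival is averaged before inverting, cold spots with high cell density pull the EUD down, which is why the EUD is strongly dependent on the cancer cell distribution.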
NASA Astrophysics Data System (ADS)
Liang, Cheng-Yen
Micromagnetic simulations of magnetoelastic nanostructures traditionally rely on either the Stoner-Wohlfarth model or the Landau-Lifshitz-Gilbert (LLG) model assuming uniform strain (and/or assuming uniform magnetization). While the uniform strain assumption is reasonable when modeling magnetoelastic thin films, this constant strain approach becomes increasingly inaccurate for smaller in-plane nanoscale structures. In this dissertation, a fully-coupled finite element micromagnetic method is developed. The method deals with micromagnetics, elastodynamics, and piezoelectric effects. The dynamics of magnetization, non-uniform strain distribution, and electric fields are iteratively solved. This more sophisticated modeling technique is critical for guiding the design of nanoscale strain-mediated multiferroic elements such as those needed in multiferroic systems. In this dissertation, we study magnetic property changes (e.g., hysteresis, coercive field, and spin states) due to strain effects in nanostructures. In addition, a multiferroic memory device is studied. A simulation of electric-field-driven magnetization switching in a nickel memory device, with voltage applied to patterned electrodes, is presented. The deterministic control law for magnetization switching in a nanoring with electric field applied to the patterned electrodes is investigated. Using the patterned electrodes, we show that the strain-induced anisotropy can be controlled, which changes the magnetization deterministically in a nano-ring.
Bomphrey, Richard J; Taylor, Graham K; Lawson, Nicholas J; Thomas, Adrian L.R
2005-01-01
Actuator disc models of insect flight are concerned solely with the rate of momentum transfer to the air that passes through the disc. These simple models assume that an even pressure is applied across the disc, resulting in a uniform downwash distribution. However, a correction factor, k, is often included to correct for the difference in efficiency between the assumed even downwash distribution, and the real downwash distribution. In the absence of any empirical measurements of the downwash distribution behind a real insect, the values of k used in the literature have been necessarily speculative. Direct measurement of this efficiency factor is now possible, and could be used to compare the relative efficiencies of insect flight across the Class. Here, we use Digital Particle Image Velocimetry to measure the instantaneous downwash distribution, mid-downstroke, of a tethered desert locust (Schistocerca gregaria). By integrating the downwash distribution, we are thereby able to provide the first direct empirical measurement of k for an insect. The measured value of k=1.12 corresponds reasonably well with that predicted by previous theoretical studies. PMID:16849240
Effects of Fuel Distribution on Detonation Tube Performance
NASA Technical Reports Server (NTRS)
Perkins, H. Douglas; Sung, Chih-Jen
2003-01-01
A pulse detonation engine uses a series of high-frequency intermittent detonation tubes to generate thrust. The process of filling the detonation tube with fuel and air for each cycle may yield non-uniform mixtures. Uniform mixing is commonly assumed when calculating detonation tube thrust performance. In this study, detonation cycles featuring idealized non-uniform H2/air mixtures were analyzed using a two-dimensional Navier-Stokes computational fluid dynamics code with detailed chemistry. Mixture non-uniformities examined included axial equivalence ratio gradients, transverse equivalence ratio gradients, and partially fueled tubes. Three different average test-section equivalence ratios were studied: one stoichiometric, one fuel-lean, and one fuel-rich. All mixtures were detonable throughout the detonation tube. Various mixtures representing the same average test-section equivalence ratio were shown to have specific impulses within 1% of each other, indicating that good fuel/air mixing is not a prerequisite for optimal detonation tube performance under the conditions investigated.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward-propagating leaders and a lognormal distribution to generate return-stroke peak currents. Downward leaders propagate vertically downward, and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest, with the corresponding peak current distribution.
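The two-generator sampling scheme described above (uniform leader origins, lognormal peak currents, electrogeometric attachment to the closest object within striking distance) can be sketched in a few lines. Everything below is illustrative, not the paper's actual inputs: the object layout, the sampling area, the lognormal parameters (median ≈ 31.1 kA and σ_ln ≈ 0.48 are commonly quoted first-stroke values), and the striking-distance law r = 10·I^0.65 m are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical objects of interest: (x, y, height) in metres
objects = np.array([[0.0, 0.0, 60.0],    # e.g. a tall tower
                    [30.0, 10.0, 15.0],  # e.g. a building
                    [-25.0, -20.0, 5.0]])

def simulate(n_flashes, area_half=200.0, mu_ln=np.log(31.1), sigma_ln=0.48):
    # Leader origins: uniform over a square area; peak currents: lognormal.
    xy = rng.uniform(-area_half, area_half, size=(n_flashes, 2))
    current = rng.lognormal(mu_ln, sigma_ln, size=n_flashes)  # kA
    strikes = np.zeros(len(objects) + 1, dtype=int)           # last slot = ground
    for (x, y), i_pk in zip(xy, current):
        r = 10.0 * i_pk**0.65          # electrogeometric striking distance (m)
        d = np.hypot(objects[:, 0] - x, objects[:, 1] - y)
        # Leader-tip height at which each object first comes within r;
        # the highest such altitude wins (attachment happens there first).
        z = np.where(d <= r,
                     objects[:, 2] + np.sqrt(np.maximum(r**2 - d**2, 0.0)),
                     -np.inf)
        z_ground = r                   # striking distance to flat ground
        strikes[np.argmax(np.append(z, z_ground))] += 1
    return strikes

counts = simulate(10_000)              # strike counts per object, ground last
```

In this toy geometry the tall tower collects far more strikes than the low object, while most flashes still terminate on open ground, which is the qualitative behaviour the statistical tool quantifies.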
Modulus reconstruction from prostate ultrasound images using finite element modeling
NASA Astrophysics Data System (ADS)
Yan, Zhennan; Zhang, Shaoting; Alam, S. Kaisar; Metaxas, Dimitris N.; Garra, Brian S.; Feleppa, Ernest J.
2012-03-01
In medical diagnosis, elastography is becoming increasingly useful. However, most methods assume a planar compression applied to the tissue surface and measure the resulting deformation. The stress distribution is relatively uniform close to the surface when using a large, flat compressor, but it diverges gradually with tissue depth. Generally in prostate elastography, the transrectal probes used for scanning and compression are cylindrical side-fire or rounded end-fire probes, and the force is applied through the rectal wall. These conditions make it very difficult to detect cancer in the prostate, since the rounded contact surfaces exaggerate the non-uniformity of the applied stress, especially for the distal, anterior prostate. We have developed a preliminary 2D Finite Element Model (FEM) to simulate prostate deformation in elastography. The model includes a homogeneous prostate with a stiffer tumor in the proximal, posterior region of the gland. A force is applied to the rectal wall to deform the prostate, and strain and stress distributions can be computed from the resultant displacements. Then, using the displacements as boundary conditions, we reconstruct the modulus distribution (the inverse problem) using a linear perturbation method. FEM simulation shows that strain and strain contrast (of the lesion) decrease very rapidly with increasing depth and lateral distance. Therefore, lesions would not be clearly visible if located far away from the probe. However, the reconstructed modulus image can better depict a relatively stiff lesion wherever it is located.
Influence of operating conditions on the optimum design of electric vehicle battery cooling plates
NASA Astrophysics Data System (ADS)
Jarrett, Anthony; Kim, Il Yong
2014-01-01
The efficiency of cooling plates for electric vehicle batteries can be improved by optimizing the geometry of internal fluid channels. In practical operation, a cooling plate is exposed to a range of operating conditions dictated by the battery, environment, and driving behaviour. To formulate an efficient cooling plate design process, the optimum design sensitivity with respect to each boundary condition is desired. This determines which operating conditions must be represented in the design process, and therefore the complexity of designing for multiple operating conditions. The objective of this study is to determine the influence of different operating conditions on the optimum cooling plate design. Three important performance measures were considered: temperature uniformity, mean temperature, and pressure drop. It was found that of these three, temperature uniformity was most sensitive to the operating conditions, especially with respect to the distribution of the input heat flux, and also to the coolant flow rate. An additional focus of the study was the distribution of heat generated by the battery cell: while it is easier to assume that heat is generated uniformly, by using an accurate distribution for design optimization, this study found that cooling plate performance could be significantly improved.
Relativistic equation of state at subnuclear densities in the Thomas-Fermi approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z. W.; Shen, H., E-mail: shennankai@gmail.com
We study non-uniform nuclear matter using the self-consistent Thomas-Fermi approximation with a relativistic mean-field model. The non-uniform matter is assumed to be composed of a lattice of heavy nuclei surrounded by dripped nucleons. At each temperature T, proton fraction Y_p, and baryon mass density ρ_B, we determine the thermodynamically favored state by minimizing the free energy with respect to the radius of the Wigner-Seitz cell, while the nucleon distribution in the cell is determined self-consistently in the Thomas-Fermi approximation. A detailed comparison is made between the present results and previous calculations in the Thomas-Fermi approximation with a parameterized nucleon distribution that has been adopted in the widely used Shen equation of state.
NASA Astrophysics Data System (ADS)
Shan, S. Ali; Saleem, H.
2018-05-01
Electrostatic solitary waves and double layers (DLs) formed by coupled ion acoustic (IA) and drift waves have been investigated in non-uniform plasma using a q-nonextensive distribution function for the electrons and assuming the ions to be cold (Ti ≪ Te). It is found that both compressive and rarefactive nonlinear structures (solitary waves and DLs) are possible in such a system. Steeper gradients favor compressive solitary waves (and double layers) and suppress rarefactive ones. The nonextensivity parameter q and the magnitudes of the gradient scale lengths of density and temperature have significant effects on the amplitude of the solitary waves (and double layers) as well as on the speed of these structures. This theoretical model is general and has been applied here to the F-region ionosphere for illustration.
The Logical Problem of Language Change.
1995-07-01
…tribution Pi. For the most part we will assume in our simulations that this distribution is uniform on degree-0 (unembedded) sentences, exactly as in… following table provides the unembedded (degree-0) sentences from each of the 8 grammars (languages) obtained by setting the 3 parameters of example 1 to different values. The languages are referred to as L1 through L8.
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
Leveraging the Cloud for Integrated Network Experimentation
2014-03-01
…kernel settings, or any of the low-level subcomponents. 3. Scalable Solutions: Businesses can build scalable solutions for their clients, ranging from… values. These values can assume several distributions that include normal, Pareto, uniform, exponential and Poisson, among others [21]. Additionally, D… communication, the web client establishes a connection to the server before traffic begins to flow. Web servers do not initiate connections to clients.
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2012-12-01
Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
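A uniform CHD, as defined above, can be sampled directly by drawing hypocenters with probability proportional to the local moment release. The sketch below uses a hypothetical 1-D Gaussian slip patch; the slip profile, fault length, and sample count are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D slip distribution along strike (moment release per cell)
x = np.linspace(0.0, 100.0, 101)                 # km along the fault
slip = np.exp(-0.5 * ((x - 60.0) / 15.0)**2)     # illustrative Gaussian slip patch

# Uniform CHD: hypocenter probability density equals the normalized slip density
p = slip / slip.sum()
hypos = rng.choice(x, size=10_000, p=p)

centroid = np.average(x, weights=slip)           # moment centroid
mean_offset = np.abs(hypos - centroid).mean()    # expected hypocenter-centroid distance
```

A "centroid-biased" CHD of the kind the data favor would concentrate the probability p nearer the centroid, so the sampled mean_offset would shrink relative to this uniform-CHD baseline, i.e. the ruptures would look more bilateral.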
Gagne, Nolan L; Cutright, Daniel R; Rivard, Mark J
2012-09-01
To improve tumor dose conformity and homogeneity for COMS plaque brachytherapy by investigating the dosimetric effects of varying component source ring radionuclides and source strengths. The MCNP5 Monte Carlo (MC) radiation transport code was used to simulate plaque heterogeneity-corrected dose distributions for individually-activated source rings of 14, 16 and 18 mm diameter COMS plaques, populated with (103)Pd, (125)I and (131)Cs sources. Ellipsoidal tumors were contoured for each plaque size and MATLAB programming was developed to generate tumor dose distributions for all possible ring weighting and radionuclide permutations for a given plaque size and source strength resolution, assuming a 75 Gy apical prescription dose. These dose distributions were analyzed for conformity and homogeneity and compared to reference dose distributions from uniformly-loaded (125)I plaques. The most conformal and homogeneous dose distributions were reproduced within a reference eye environment to assess organ-at-risk (OAR) doses in the Pinnacle(3) treatment planning system (TPS). The gamma-index analysis method was used to quantitatively compare MC and TPS-generated dose distributions. Concentrating > 97% of the total source strength in a single or pair of central (103)Pd seeds produced the most conformal dose distributions, with tumor basal doses a factor of 2-3 higher and OAR doses a factor of 2-3 lower than those of corresponding uniformly-loaded (125)I plaques. Concentrating 82-86% of the total source strength in peripherally-loaded (131)Cs seeds produced the most homogeneous dose distributions, with tumor basal doses 17-25% lower and OAR doses typically 20% higher than those of corresponding uniformly-loaded (125)I plaques. Gamma-index analysis found > 99% agreement between MC and TPS dose distributions. 
A method was developed to select intra-plaque ring radionuclide compositions and source strengths to deliver more conformal and homogeneous tumor dose distributions than uniformly-loaded (125)I plaques. This method may support coordinated investigations of an appropriate clinical target for eye plaque brachytherapy.
Elastohydrodynamics of microfilament under distributed body actuation
NASA Astrophysics Data System (ADS)
Singh, T. Sonamani; Yadava, R. D. S.
2018-05-01
The dynamics of an active filament in low Reynolds (Re) number regime is analyzed under distributed body actuation represented by the sliding filament model. The governing elastohydrodynamic equations are formulated by assuming the resistive force theory (RFT). The effect of geometric nonlinearity in bending stiffness on the propulsive thrust has been analyzed where the former is introduced by cross-sectional tapering. Two types of boundary conditions (clamped-free and hinged-free) are analyzed. A comparison with the uniform filament dynamics reveals that the tapering enhances the thrust under both types of boundary conditions.
Buckling Tests with a Spar-rib Grill
NASA Technical Reports Server (NTRS)
Weinhold, Josef
1940-01-01
The present report deals with a comparison of mathematically and experimentally determined buckling loads of a spar-rib grill, on the assumption of constant spar section and infinitely closely spaced ribs with rigidity symmetrical about the grill center. The loads are applied as equal bending moments at both spar ends, as compression along the line connecting the joints, and along the spar center line as the spar weight, assumed to be uniformly distributed.
Theoretical analysis of nonuniform skin effects on drawdown variation
NASA Astrophysics Data System (ADS)
Chen, C.-S.; Chang, C. C.; Lee, M. S.
2003-04-01
Under field conditions, the skin zone surrounding the well screen is rarely uniformly distributed in the vertical direction. To understand such non-uniform skin effects on drawdown variation, we assume the skin factor to be an arbitrary, continuous or piece-wise continuous function S_k(z), and incorporate it into a well hydraulics model for constant rate pumping in a homogeneous, vertically anisotropic, confined aquifer. Solutions of depth-specific drawdown and vertical average drawdown are determined by using the Gram-Schmidt method. The non-uniform effects of S_k(z) in vertical average drawdown are averaged out, and can be represented by a constant skin factor S_k. As a result, drawdown of fully penetrating observation wells can be analyzed by appropriate well hydraulics theories assuming a constant skin factor. The S_k is the vertical average value of S_k(z) weighted by the well bore flux q_w(z). In depth-specific drawdown, however, the non-uniform effects of S_k(z) vary with radial and vertical distances, which are under the influence of the vertical profile of S_k(z) and the vertical anisotropy ratio, K_r/K_z. Therefore, drawdown of partially penetrating observation wells may reflect the vertical anisotropy as well as the non-uniformity of the skin zone. The method of determining S_k(z) developed herein involves the use of q_w(z) as can be measured with the borehole flowmeter, and K_r/K_z and S_k as can be determined by the conventional pumping test.
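The effective constant skin factor described above, the vertical average of S_k(z) weighted by the wellbore flux q_w(z), reduces to a one-line weighted mean once both profiles are tabulated (e.g. from borehole flowmeter measurements). The profiles below are purely illustrative assumptions, not from the paper.

```python
import numpy as np

# Hypothetical depth profiles over a 20 m well screen, tabulated on a uniform grid
z = np.linspace(0.0, 20.0, 201)                 # depth below screen top (m)
s_k = 2.0 + 1.5 * np.sin(np.pi * z / 20.0)      # assumed skin-factor profile S_k(z)
q_w = np.exp(-z / 10.0)                         # assumed wellbore-flux profile q_w(z)

# Effective constant skin factor: vertical average of S_k(z) weighted by q_w(z)
s_k_eff = (s_k * q_w).sum() / q_w.sum()
```

With this s_k_eff, drawdown at fully penetrating observation wells can then be analyzed with conventional constant-skin well hydraulics, which is the paper's point.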
Formation and evolution of magnetised filaments in wind-swept turbulent clumps
NASA Astrophysics Data System (ADS)
Banda-Barragan, Wladimir Eduardo; Federrath, Christoph; Crocker, Roland M.; Bicknell, Geoffrey Vincent; Parkin, Elliot Ross
2015-08-01
Using high-resolution three-dimensional simulations, we examine the formation and evolution of filamentary structures arising from magnetohydrodynamic interactions between supersonic winds and turbulent clumps in the interstellar medium. Previous numerical studies assumed homogeneous density profiles, null velocity fields, and uniformly distributed magnetic fields as the initial conditions for interstellar clumps. Here, we have, for the first time, incorporated fractal clumps with log-normal density distributions, random velocity fields and turbulent magnetic fields (superimposed on top of a uniform background field). Disruptive processes, instigated by dynamical instabilities and akin to those observed in simulations with uniform media, lead to stripping of clump material and the subsequent formation of filamentary tails. The evolution of filaments in uniform and turbulent models is, however, radically different, as evidenced by comparisons of global quantities in both scenarios. We show, for example, that turbulent clumps produce tails with higher velocity dispersions, increased gas mixing, greater kinetic energy, and lower plasma beta than their uniform counterparts. We attribute the observed differences to: 1) the turbulence-driven enhanced growth of dynamical instabilities (e.g. Kelvin-Helmholtz and Rayleigh-Taylor instabilities) at fluid interfaces, and 2) the localised amplification of magnetic fields caused by the stretching of field lines trapped in the numerous surface deformations of fractal clumps. We briefly discuss the implications of this work for the physics of the optical filaments observed in the starburst galaxy M82.
NASA Technical Reports Server (NTRS)
Economos, A. C.; Miquel, J.
1979-01-01
A simple physiological model of mortality kinetics is used to assess the intuitive concept that the aging rates of populations are proportional to their mortality rates. It is assumed that the vitality of an individual can be expressed as a simple summation of the weighted functional capacities of its organs and homeostatic systems that are indispensable for survival. It is shown that the mortality kinetics of a population can be derived by a linear transformation of the frequency distribution of vitality, assuming a uniform constant rate of decline of the physiological functions. A simple comparison of two populations is not possible when they have different vitality frequency distributions. Analysis of the data using the model suggests that the differences in decline of survivorship with age between the military pilot population, a medically insured population, and the control population can be accounted for by the effect of physical selection on the vitality frequency distribution of the screened populations.
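The model's central claim, that under a uniform constant rate of decline the survivorship curve is just a linear transformation of the initial vitality frequency distribution, can be checked with a minimal simulation. The Gaussian vitality distribution and the decline rate below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical initial vitality distribution (arbitrary units)
v0 = rng.normal(loc=100.0, scale=15.0, size=100_000)
decline = 1.2                       # assumed uniform, constant decline per year

ages = np.arange(0, 121)
# An individual survives to age t while v0 - decline*t > 0, so survivorship
# is the upper tail of the vitality distribution evaluated at decline*t:
# a rescaling (linear transformation) of the vitality frequency distribution.
survivorship = np.array([(v0 > decline * t).mean() for t in ages])
```

Screening a population (e.g. pilot selection) amounts to truncating the low-vitality tail of v0, which shifts this whole survivorship curve, matching the abstract's explanation of the differences between the screened and control populations.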
Eigenmodes of Ducted Flows With Radially-Dependent Axial and Swirl Velocity Components
NASA Technical Reports Server (NTRS)
Kousen, Kenneth A.
1999-01-01
This report characterizes the sets of small disturbances possible in cylindrical and annular ducts with mean flow whose axial and tangential components vary arbitrarily with radius. The linearized equations of motion are presented and discussed, and then exponential forms for the axial, circumferential, and time dependencies of any unsteady disturbances are assumed. The resultant equations form a generalized eigenvalue problem, the solution of which yields the axial wavenumbers and radial mode shapes of the unsteady disturbances. Two numerical discretizations are applied to the system of equations: (1) a spectral collocation technique based on Chebyshev polynomial expansions on the Gauss-Lobatto points, and (2) second and fourth order finite differences on uniform grids. The discretized equations are solved using a standard eigensystem package employing the QR algorithm. The eigenvalues fall into two primary categories: a discrete set (analogous to the acoustic modes found in uniform mean flows) and a continuous band (analogous to convected disturbances in uniform mean flows) where the phase velocities of the disturbances correspond to the local mean flow velocities. Sample mode shapes and eigensystem distributions are presented for both sheared axial and swirling flows. The physics of swirling flows is examined with reference to hydrodynamic stability and completeness of the eigensystem expansions. The effect of assuming exponential dependence in the axial direction is discussed.
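The workflow described above (discretize, assemble a generalized eigenvalue problem, hand it to a standard QR/QZ eigensolver) can be illustrated on a model operator. The sketch below is not the duct-mode system itself: it uses second-order finite differences on the stand-in problem -u'' = λu with Dirichlet boundaries, whose exact eigenvalues (nπ)² make the accuracy easy to check.

```python
import numpy as np
from scipy.linalg import eig

# Model problem standing in for the duct-mode operator: -u'' = λ u on (0, 1),
# u(0) = u(1) = 0, discretized with second-order central differences and posed
# as a generalized eigenvalue problem A u = λ B u (B = I here, but the same
# call handles a nontrivial mass matrix).
n = 200
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
B = np.eye(n)

vals, vecs = eig(A, B)                 # dense QZ-based generalized eigensolver
vals = np.sort(vals.real)
exact = (np.arange(1, 6) * np.pi)**2   # (nπ)² for the continuous operator
rel_err = np.abs(vals[:5] - exact) / exact
```

For the real duct problem A and B encode the linearized flow equations and radial boundary conditions; the discrete acoustic-like modes then appear alongside the continuous convected band the report describes.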
Stability of compressible Taylor-Couette flow
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Chow, Chuen-Yen
1991-01-01
Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.
NASA Astrophysics Data System (ADS)
Cao, M.-H.; Jiang, H.-K.; Chin, J.-S.
1982-04-01
An improved flat-fan spray model is used for the semi-empirical analysis of liquid fuel distribution downstream of a plain orifice injector under cross-stream air flow. The model assumes that, due to the aerodynamic force of the high-velocity cross air flow, the injected fuel immediately forms a flat-fan liquid sheet perpendicular to the cross flow. Once the droplets have been formed, the trajectories of individual droplets determine fuel distribution downstream. Comparison with test data shows that the proposed model accurately predicts liquid fuel distribution at any point downstream of a plain orifice injector under high-velocity, low-temperature uniform cross-stream air flow over a wide range of conditions.
NASA Astrophysics Data System (ADS)
Glas, Frank
2003-06-01
We give a fully analytical solution for the displacement and strain fields generated by the coherent elastic relaxation of a type of misfitting inclusions with uniform dilatational eigenstrain lying in a half space, assuming linear isotropic elasticity. The inclusion considered is an infinitely long circular cylinder having an axis parallel to the free surface and truncated by two arbitrarily positioned planes parallel to this surface. These calculations apply in particular to strained semiconductor quantum wires. The calculations are illustrated by examples showing quantitatively that, depending on the depth of the wire under the free surface, the latter may significantly affect the magnitude and the distribution of the various strain components inside the inclusion as well as in the surrounding matrix.
Behavior of dusty real gas on adiabatic propagation of cylindrical imploding strong shock waves
NASA Astrophysics Data System (ADS)
Gangwar, P. K.
2018-05-01
In this paper, the CCW method has been used to study the effect of a dusty real gas on the adiabatic propagation of cylindrical imploding strong shock waves. The strength of the overtaking waves is estimated under the assumption that both C+ and C- disturbances propagate in a non-uniform region with the same density distribution. The dusty gas is assumed to be a mixture of a real gas and a large number of small spherical solid particles of uniform size, uniformly distributed in the medium. Under equilibrium flow conditions, expressions for the shock strength have been derived both for free propagation and under the effect of overtaking disturbances. The variation of all flow variables with propagation distance, the mass concentration of solid particles in the mixture, and the ratio of the solid-particle density to the initial gas density has been computed and discussed through graphs. It is found that the presence of dust particles in the gaseous medium has significant effects on the variation of the flow variables, and that the shock is strengthened under the influence of overtaking disturbances. The results obtained here have been compared with those for an ideal gas.
Adapting radiotherapy to hypoxic tumours
NASA Astrophysics Data System (ADS)
Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag
2006-10-01
In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. 
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brannon, R.M.
1996-12-31
A mathematical framework is developed for the study of materials containing axisymmetric inclusions or flaws such as ellipsoidal voids, penny-shaped cracks, or fibers of circular cross-section. The general case of nonuniform statistical distributions of such heterogeneities is attacked by first considering a spatially uniform distribution of flaws that are all oriented in the same direction. Assuming an isotropic substrate, the macroscopic material properties of this simpler microstructure should naturally be transversely isotropic. An orthogonal basis for the linear subspace consisting of all double-symmetric transversely isotropic fourth-order tensors associated with a given material vector is applied to deduce the explicit functional dependence of the material properties of these aligned materials on the shared symmetry axis. The aligned and uniform microstructure seems geometrically simple enough that the macroscopic transversely isotropic properties could be derived in closed form. Since the resulting properties are transversely isotropic, the analyst must therefore be able to identify the appropriate coefficients of the transverse basis. Once these functions are identified, a principle of superposition of strain rates may be applied to define an expectation integral for the composite properties of a material containing arbitrary anisotropic distributions of axisymmetric inhomogeneities. A proposal for coupling plastic anisotropy to the elastic anisotropy is presented, in which the composite yield surface is interpreted as a distortion of the isotropic substrate yield surface; the distortion directions are coupled to the elastic anisotropy directions. Finally, some commonly assumed properties (such as major symmetry) of the Cauchy tangent stiffness tensor are shown to be inappropriate for large distortions of anisotropic materials.
Finite Element Aircraft Simulation of Turbulence
NASA Technical Reports Server (NTRS)
McFarland, R. E.
1997-01-01
A turbulence model has been developed for realtime aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to conventional aircraft, V/STOL aircraft, and disc rotor model helicopter simulations. Vehicle angular activity in response to turbulence is computed from geometrical and temporal relationships rather than by using the conventional continuum approximations that assume uniform gust immersion and low frequency responses. By using techniques similar to those recently developed for blade-element rotor models, the angular-rate filters of conventional turbulence models are not required. The model produces rotational rates as well as air mass translational velocities in response to both stochastic and deterministic disturbances, where the discrete gusts and turbulence magnitudes may be correlated with significant terrain features or ship models. Assuming isotropy, a two-dimensional vertical turbulence field is created. A novel Gaussian interpolation technique is used to distribute vertical turbulence on the wing span or lateral rotor disc, and this distribution is used to compute roll responses. Air mass velocities are applied at significant centers of pressure in the computation of the aircraft's pitch and roll responses.
Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1992-01-01
Growth rates are calculated for a protoplanet on a circular orbit embedded in a disk of planetesimals with a triaxial Gaussian velocity dispersion and uniform surface density. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are set equal to the rms values of a disk with a locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir
2008-04-01
We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements, in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect, strengthening similar observations based on numerical regularization methods.
A new model of the lunar ejecta cloud
NASA Astrophysics Data System (ADS)
Christou, A. A.
2014-04-01
Every airless body in the solar system is surrounded by a cloud of ejecta produced by the impact of interplanetary meteoroids on its surface [1]. Such "dust exospheres" have been observed around the Galilean satellites of Jupiter [2, 3]. The prospect of long-term robotic and human operations on the Moon by the US and other countries has rekindled interest in the subject [4]. This interest has culminated in the recent investigation of the Moon's dust exosphere by the LADEE spacecraft [5]. Here a model is presented of a ballistic, collisionless, steady state population of ejecta launched vertically at randomly distributed times and velocities. Assuming a uniform distribution of launch times, I derive closed form solutions for the probability density functions (pdfs) of the height distribution of particles and the distribution of their speeds in a rest frame both at the surface and at altitude. The treatment is then extended to particle motion with respect to a moving platform such as an orbiting spacecraft. These expressions are compared with numerical simulations under lunar surface gravity where the underlying ejection speed distribution is (a) uniform (b) a power law. I discuss the predictions of the model, its limitations, and how it can be validated against near-surface and orbital measurements.
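The steady-state construction described in this abstract is straightforward to mimic numerically. The sketch below (all parameter values and the uniform speed law are illustrative assumptions, not taken from the paper) launches particles vertically at uniformly distributed times under lunar surface gravity and records the heights of those still aloft at an observation time:

```python
import random

G = 1.62  # lunar surface gravity, m/s^2

def snapshot_heights(n, v_min, v_max, t_obs, seed=0):
    """Heights at time t_obs of ejecta launched vertically at times
    uniform on [0, t_obs], with launch speeds uniform on [v_min, v_max].
    Particles whose ballistic flight time has elapsed have landed and
    are excluded, giving one snapshot of the steady-state cloud."""
    rng = random.Random(seed)
    heights = []
    for _ in range(n):
        v = rng.uniform(v_min, v_max)
        t0 = rng.uniform(0.0, t_obs)
        tau = t_obs - t0                      # time since launch
        if tau <= 2.0 * v / G:                # still aloft
            heights.append(v * tau - 0.5 * G * tau * tau)
    return heights
```

A histogram of the returned heights approximates the height pdf that the paper derives in closed form; making `t_obs` much longer than the longest flight time ensures the sample is effectively in steady state.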
NASA Astrophysics Data System (ADS)
Motaghedi-Larijani, Arash; Aminnayeri, Majid
2017-03-01
Cross-docking is a supply-chain strategy that can reduce transportation and inventory costs. This study is motivated by a fruit and vegetable distribution centre in Tehran, which has cross-docks and a limited time to admit outbound trucks. In this article, outbound trucks are assumed to arrive at a cross-dock with a single outbound door, with arrival times uniformly distributed on (0, L). The total number of assigned trucks is constant and the loading time is fixed. A queuing model is adapted to this situation, and the expected waiting time of each customer is calculated, yielding a waiting-time curve as a function of the window length. Finally, the window length L is optimized to minimize the total cost, which comprises the waiting time of the trucks and the admission cost of the cross-dock. Some illustrative examples of cross-docking are presented and solved using the proposed method.
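The queueing setup above (uniform arrivals over a window of length L, a single door, fixed loading time) is easy to probe by simulation, and the window-length optimization then reduces to a one-dimensional cost scan. The cost weights and grid below are illustrative assumptions, not the paper's values:

```python
import random

def mean_wait(n_trucks, window, service, n_rep=400, seed=1):
    """Average waiting time when n_trucks arrive at times uniform on
    (0, window) at a single door with fixed loading time `service` (FIFO)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rep):
        free_at = 0.0
        for a in sorted(rng.uniform(0.0, window) for _ in range(n_trucks)):
            start = max(a, free_at)      # wait if the door is busy
            total += start - a
            free_at = start + service
    return total / (n_rep * n_trucks)

def best_window(n_trucks, service, wait_cost, window_cost, grid):
    """Window length minimizing waiting cost plus window (admission) cost."""
    return min(grid, key=lambda L: wait_cost * n_trucks * mean_wait(n_trucks, L, service)
                                   + window_cost * L)
```

Shrinking the window packs the arrivals together and drives up congestion, so the two cost terms pull in opposite directions, which is exactly what makes L worth optimizing.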
User's Manual for Thermal Analysis Program of Axially Grooved Heat Pipe (HTGAP)
NASA Technical Reports Server (NTRS)
Kamotani, Y.
1978-01-01
A computer program that numerically predicts the steady state temperature distribution inside an axially grooved heat pipe wall for a given groove geometry and working fluid under various heat input and output modes is described. The program computes both evaporator and condenser film coefficients. The program is able to handle both axisymmetric and nonaxisymmetric heat transfer cases. Non-axisymmetric heat transfer results either from non-uniform input at the evaporator or non-uniform heat removal from the condenser, or from both. The presence of a liquid pool in the condenser region under one-g condition also causes non-axisymmetric heat transfer, and its effect on the pipe wall temperature distribution is included in the present program. The hydrodynamic aspect of an axially grooved heat pipe is studied in the Groove Analysis Program (GAP). The present thermal analysis program assumes that the GAP program (or other similar programs) is run first so that the heat transport limit and optimum fluid charge of the heat pipe are known a priori.
NASA Astrophysics Data System (ADS)
Palakurthi, Nikhil Kumar; Ghia, Urmila; Comer, Ken
2013-11-01
Capillary penetration of liquid through fibrous porous media is important in many applications such as printing, drug delivery patches, sanitary wipes, and performance fabrics. Historically, capillary transport (with a distinct liquid propagating front) in porous media is modeled using capillary-bundle theory. However, it is not clear if the capillary model (Washburn equation) describes the fluid transport in porous media accurately, as it assumes uniformity of pore sizes in the porous medium. The present work investigates the limitations of the applicability of the capillary model by studying liquid penetration through virtual fibrous media with uniform and non-uniform pore sizes. For the non-uniform-pore fibrous medium, the effective capillary radius of the fibrous medium was estimated from the pore-size distribution curve. Liquid penetration into the 3D virtual fibrous medium at micro-scale was simulated using OpenFOAM, and the numerical results were compared with the Washburn-equation capillary-model predictions. Preliminary results show that the Washburn equation over-predicts the height rise in the early stages (purely inertial and visco-inertial stages) of capillary transport.
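The capillary-bundle prediction the simulations are compared against has a closed form: neglecting gravity and inertia, the Washburn equation gives a penetration height growing as the square root of time. The fluid parameters in the usage example are illustrative (roughly water in ~10 µm pores), not values from the study:

```python
import math

def washburn_height(t, r_eff, gamma, theta, mu):
    """Washburn equation: h(t) = sqrt(r_eff * gamma * cos(theta) * t / (2*mu)).

    r_eff  : effective capillary radius (e.g., from a pore-size distribution curve)
    gamma  : surface tension
    theta  : contact angle (radians)
    mu     : dynamic viscosity
    """
    return math.sqrt(r_eff * gamma * math.cos(theta) * t / (2.0 * mu))

# Illustrative: r_eff = 10 um, water-like gamma = 0.072 N/m, mu = 1e-3 Pa s
h_1s = washburn_height(1.0, 1e-5, 0.072, 0.0, 1e-3)
```

The square-root scaling means quadrupling the elapsed time doubles the predicted height; the over-prediction reported above occurs precisely where this viscous-regime scaling does not yet hold.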
Nonlinear periodic wavetrains in thin liquid films falling on a uniformly heated horizontal plate
NASA Astrophysics Data System (ADS)
Issokolo, Remi J. Noumana; Dikandé, Alain M.
2018-05-01
A thin liquid film falling on a uniformly heated horizontal plate spreads into fingering ripples that can display a complex dynamics ranging from continuous waves, nonlinear spatially localized periodic wave patterns (i.e., rivulet structures) to modulated nonlinear wavetrain structures. Some of these structures have been observed experimentally; however, conditions under which they form are still not well understood. In this work, we examine profiles of nonlinear wave patterns formed by a thin liquid film falling on a uniformly heated horizontal plate. For this purpose, the Benney model is considered assuming a uniform temperature distribution along the film propagation on the horizontal surface. It is shown that for strong surface tension but a relatively small Biot number, spatially localized periodic-wave structures can be analytically obtained by solving the governing equation under appropriate conditions. In the regime of weak nonlinearity, a multiple-scale expansion combined with the reductive perturbation method leads to a complex Ginzburg-Landau equation: the solutions of which are modulated periodic pulse trains which amplitude and width and period are expressed in terms of characteristic parameters of the model.
Design, development and manufacture of a breadboard radio frequency mass gauging system
NASA Technical Reports Server (NTRS)
1975-01-01
The feasibility of the RF gauging mode, counting technique was demonstrated for gauging liquid hydrogen and liquid oxygen under all attitude conditions. With LH2, it was also demonstrated under dynamic fluid conditions, in which the fluid assumes ever changing positions within the tank, that the RF gauging technique on the average provides a very good indication of mass. It is significant that the distribution of the mode count data at each fill level during dynamic LH2 and LOX orientation testing does approach a statistical normal distribution. Multiple space-diversity probes provide better coupling to the resonant modes than utilization of a single probe element. The variable sweep rate generator technique provides a more uniform mode versus time distribution for processing.
A Hybrid Model for Multiscale Laser Plasma Simulations with Detailed Collisional Physics
2016-11-29
Quantum calculations with corrections for low temperature (NIST cutoff); starts with LANL data and assumes higher excited states are ionized. NIST grouping: Boltzmann or uniform grouping, saving 20-30% over electron splitting, applied on a case-by-case basis. Test conditions: gas temperature 0.035 eV; atomic density 10^20 m^-3; ionization fraction 10^-13; electron temperature 10 and 100 eV; t = [0, 10^6] seconds.
Brivio, D; Nguyen, P L; Sajo, E; Ngwa, W; Zygmanski, P
2017-03-07
We investigate via Monte Carlo simulations a new 125I brachytherapy treatment technique for high-risk prostate cancer patients via injection of Au nanoparticles (AuNP) directly into the prostate. The purpose of using the nanoparticles is to increase the therapeutic index via two synergistic effects: enhanced energy deposition within the prostate and simultaneous shielding of organs at risk from radiation escaping from the prostate. Both uniform and non-uniform concentrations of AuNP are studied. The latter are modeled considering the possibility of AuNP diffusion after the injection using brachy needles. We study two extreme cases of coaxial AuNP concentrations: centered on brachy needles and centered half-way between them. Assuming a uniform distribution of 30 mg g-1 of AuNP within the prostate, we obtain a dose enhancement larger than a factor of 2 to the prostate. Non-uniform concentrations of AuNP ranging from 10 mg g-1 to 66 mg g-1 were studied. The higher the concentration in a given region of the prostate, the greater the enhancement therein. We obtain the highest dose enhancement when the brachytherapy needles are coincident with AuNP injection needles but, at the same time, the regions in the tail are colder (average dose ratio of 0.7). The best enhancement uniformity is obtained with the seeds in the tail of the AuNP distribution. In both uniform and non-uniform cases the urethra and rectum receive less than 1/3 dose compared to an analog treatment without AuNP. Remarkably, employing AuNP not only significantly increases dose to the target but also decreases dose to the neighboring rectum and even urethra, which is embedded within the prostate. These are mutually interdependent effects, as more enhancement leads to more shielding and vice versa. Caution is needed, since cold or hot spots may be created if the AuNP concentration is not properly distributed with respect to the seed locations.
Cosmology with galaxy cluster phase spaces
NASA Astrophysics Data System (ADS)
Stark, Alejo; Miller, Christopher J.; Huterer, Dragan
2017-07-01
We present a novel approach to constrain accelerating cosmologies with galaxy cluster phase spaces. With the Fisher matrix formalism we forecast constraints on the cosmological parameters that describe the cosmological expansion history. We find that our probe has the potential of providing constraints comparable to, or even stronger than, those from other cosmological probes. More specifically, with 1000 (100) clusters uniformly distributed in the redshift range 0 ≤ z ≤ 0.8, after applying a conservative 80% mass scatter prior on each cluster and marginalizing over all other parameters, we forecast 1σ constraints on the dark energy equation of state w and matter density parameter Ω_M of σ_w = 0.138 (0.431) and σ_ΩM = 0.007 (0.025) in a flat universe. Assuming 40% mass scatter and adding a prior on the Hubble constant we can achieve a constraint on the Chevallier-Polarski-Linder parametrization of the dark energy equation of state parameters w0 and wa with 100 clusters in the same redshift range: σ_w0 = 0.191 and σ_wa = 2.712. Dropping the assumption of flatness and assuming w = -1 we also attain competitive constraints on the matter and dark energy density parameters: σ_ΩM = 0.101 and σ_ΩΛ = 0.197 for 100 clusters uniformly distributed in the range 0 ≤ z ≤ 0.8 after applying a prior on the Hubble constant. We also discuss various observational strategies for tightening constraints in both the near and far future.
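The forecasting machinery here is the standard Fisher-matrix formalism. A minimal single-parameter version (a generic sketch, not the authors' multi-parameter pipeline) shows why adding clusters and tightening priors shrinks the forecast error:

```python
import math

def fisher_sigma(derivs, sigmas):
    """One-parameter Fisher forecast.

    For measurements mu_i with Gaussian errors sigma_i, the Fisher
    information on parameter p is F = sum_i (d mu_i / d p)^2 / sigma_i^2,
    and the forecast 1-sigma error on p is 1/sqrt(F).
    """
    F = sum((d / s) ** 2 for d, s in zip(derivs, sigmas))
    return 1.0 / math.sqrt(F)
```

With identical measurements, quadrupling their number halves the forecast error (the familiar 1/sqrt(N) scaling behind the 1000-cluster versus 100-cluster numbers quoted above); a prior enters as one more effective measurement in the sum.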
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelbrot, B.B.
1991-03-01
The following statements are obviously quite wrong: oil fields are circular; they are the same size and are distributed uniformly throughout the world; soil is of uniform porosity and permeability; after water has been pumped into a field it seeps through as an underground sphere. The preceding statements are so grossly incorrect that they do not even provide useful first approximations that one could improve upon by adding so-called corrective terms. For example, one gains little by starting with the notion of a uniform distribution of oil fields and then assuming it is perturbed by small Gaussian scatter. The flow of water in a porous medium often fingers out in a pattern so diffuse that a sphere is not a useful point of departure in describing it. In summary, even the simplest data underlying petroleum geology exhibit very gross irregularity and unevenness. Fractal geometry is the proper geometry of manageable irregularity, fragmentation, and unevenness. It is the only workable alternative between the excessive order of the Euclidean geometry and unmanageable disorder. The main features of fractal geometry will be described and several techniques will be pointed out that show promise for the petroleum geologist.
Cybulski, Olgierd; Jakiela, Slawomir; Garstecki, Piotr
2015-12-01
The simplest microfluidic network (a loop) comprises two parallel channels with a common inlet and a common outlet. Recent studies that assumed a constant cross section of the channels along their length have shown that the sequence of droplets entering the left (L) or right (R) arm of the loop can present either a uniform distribution of choices (e.g., RLRLRL...) or long sequences of repeated choices (RRR...LLL), with all the intermediate permutations being dynamically equivalent and virtually equally probable to be observed. We use experiments and computer simulations to show that even a small variation of the cross section along the channels completely shifts the dynamics either into a strong preference for highly grouped patterns (RRR...LLL) that generate system-size oscillations in flow or, just the opposite, to patterns that distribute the droplets homogeneously between the arms of the loop. We also show the importance of noise in the process of self-organization of the spatiotemporal patterns of droplets. Our results provide guidelines for rational design of systems that reproducibly produce either grouped or homogeneous sequences of droplets flowing in microfluidic networks.
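A toy resistance model reproduces the two regimes qualitatively. Here each arm's resistance is simply the number of droplets it currently holds, plus an optional extra base resistance on one arm standing in for a cross-section variation; each incoming droplet enters the lower-resistance arm. All modeling choices are illustrative, not the paper's simulation:

```python
def droplet_sequence(n, transit=4, bias=0):
    """Route n droplets through a two-arm loop.

    Each droplet enters the arm with lower resistance (droplet count plus
    `bias` on the right arm) and occupies it for `transit` injection periods.
    Returns the sequence of choices as a string of 'L'/'R'.
    """
    in_arm = {"L": [], "R": []}      # departure times of droplets in each arm
    seq = []
    for t in range(n):
        r_l = len(in_arm["L"])
        r_r = len(in_arm["R"]) + bias
        arm = "L" if r_l <= r_r else "R"
        seq.append(arm)
        in_arm[arm].append(t + transit)
        for a in in_arm:             # drop droplets that have exited
            in_arm[a] = [d for d in in_arm[a] if d > t + 1]
    return "".join(seq)
```

With symmetric arms the feedback enforces strict alternation (LRLR...), while an asymmetry of only a couple of droplet-resistance units already produces grouped runs, mirroring the sensitivity to cross-section variation reported above.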
The effect of wall temperature distribution on streaks in compressible turbulent boundary layer
NASA Astrophysics Data System (ADS)
Zhang, Zhao; Tao, Yang; Xiong, Neng; Qian, Fengxue
2018-05-01
The thermal boundary condition at the wall is very important for compressible flow due to the coupling of the energy equation, and much research on it has been carried out in past decades. In most of these works, the wall was assumed to be an adiabatic or uniformly isothermal surface; flow over a wall with a spatially varying temperature distribution has seldom been studied. Lagha studied the effect of a uniform isothermal wall on the streaks, and pointed out that the higher the wall temperature, the longer the streaks (POF, 2011, 23, 015106). Accordingly, in this paper we impose streamwise stripes of wall temperature on a compressible turbulent boundary layer at Mach 3.0 and study their effect on the streaks by means of direct numerical simulation. The mean wall temperature is approximately equal to that of the adiabatic case, and the width of the temperature stripes is of the same order as the width of the streaks. The streak patterns in the near-wall region with different temperature stripes are shown in the paper. Moreover, we find that there is a reduction of friction velocity with the wall temperature stripes when compared with the adiabatic case.
Photocounting distributions for exponentially decaying sources.
Teich, M C; Card, H C
1979-05-01
Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
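The setup (Poisson statistics, exponentially decaying intensity, uniformly distributed window start) is straightforward to check by Monte Carlo; the sketch below draws the Poisson variate by CDF inversion so that only the standard library is needed. Parameter values are illustrative, not from the paper:

```python
import math
import random

def photocount_samples(n, i0, tau, T, t_max, seed=0):
    """Poisson counts in a window of length T whose start time t0 is
    uniform on [0, t_max], for intensity i0 * exp(-t / tau).

    The conditional mean count is the integral of the intensity over
    [t0, t0 + T]: i0 * tau * exp(-t0/tau) * (1 - exp(-T/tau)).
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        t0 = rng.uniform(0.0, t_max)
        mean = i0 * tau * math.exp(-t0 / tau) * (1.0 - math.exp(-T / tau))
        # draw Poisson(mean) by inversion of the CDF
        k, p, u = 0, math.exp(-mean), rng.random()
        c = p
        while u > c:
            k += 1
            p *= mean / k
            c += p
        counts.append(k)
    return counts
```

Averaging the conditional mean over the uniform start time gives the overall mean count in closed form, which the sample mean should reproduce.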
Razak, Fahad; Davey Smith, George; Subramanian, S V
2016-12-01
A mean-centric view of populations, whereby a change in the mean of a health variable at the population level is assumed to result in uniform change across the distribution, is a core component of Geoffrey Rose's concept of the "population strategy" of disease prevention. This idea also has a critical role in Rose's observation that individuals who are considered abnormal or sick (the rightward tail of the distribution) and those who are considered normal (the center) are very closely related, and that true preventive medicine must focus on shifting the normal or average. In this Perspective, we revisit these core tenets of Rose's concept of preventive medicine after providing an overview of the key concepts that he developed. We examine whether these assumptions apply to population changes in body mass index (BMI) and show that there is considerable evidence of a widening of the BMI distribution in populations over time. We argue that, with respect to BMI, the idea of using statistical measures of a population based solely on means, and the assumption that populations are coherent entities that change uniformly over time, may not fully capture the true nature of changes in the population. These issues have important implications for how we assess and interpret the health of populations over time, and for the balance between universal and targeted strategies aimed at improving health. © 2016 American Society for Nutrition.
NASA Astrophysics Data System (ADS)
Xinyu-Tan; Duanming-Zhang; Shengqin-Feng; Li, Zhi-hua; Li, Guan; Li, Li; Dan, Liu
2006-05-01
The dynamical characteristics and effects of atoms and particulates ejected from a surface by nanosecond pulsed-laser ablation are very important. In this work, accounting for the inelasticity and non-uniformity of the plasma particles thermally desorbed from a plane surface into vacuum by nanosecond laser ablation, the one-dimensional particle flow is studied on the basis of a quasi-molecular dynamics (QMD) simulation. It is assumed that atoms and particulates ejected from the surface of a target have a Maxwell velocity distribution corresponding to the surface temperature and that particles collide in the ablation plume. The particle mass is continuous and follows a fractal distribution, and the collisions are inelastic. Our results show that inelasticity and non-uniformity strongly affect the dynamical behavior of the particle flow. As the restitution coefficient e decreases and the fractal dimension D increases, the velocity distributions of the plasma particle system deviate from the initial Gaussian distribution. Increasing dissipation energy ΔE causes the density distribution to become clustered and drawn toward the center of mass. Predictions of particle behavior based on the proposed fractal, inelastic model are found to be in agreement with experimental observations. This verifies the validity of the present model for the dynamical behavior of pulsed-laser-induced particle flow.
1987-10-27
based on the radiosity concept and was simply and quickly formulated once we assumed that the power distribution across each surface was uniform. The power per unit area leaving A1, its radiosity B1, consists of two components: the direct emission ε1σT1^4 and the diffusely reflected portion of the radiation arriving from the other surface. The radiation power leaving A2, the radiosity B2, is the radiation power arriving at the aperture from the concentrator; it is given by B2 = P/A2 (5)
1993-09-01
the geometrical center of the control volume, coupled with the use of linear interpolation for internodal variation, usually leads to non-… (2827) fuel using the experimental data in Figs. 1 and 2. It is assumed that non-depleting species such as copper, sulfur, and nitrogen … One case, however, exhibited a very non-uniform distribution of fuel liquid; for combustor concepts, proper control of fuel-air mixing is essential.
Distributed Adaptive Fuzzy Control for Nonlinear Multiagent Systems Via Sliding Mode Observers.
Shen, Qikun; Shi, Peng; Shi, Yan
2016-12-01
In this paper, the problem of distributed adaptive fuzzy control is investigated for high-order uncertain nonlinear multiagent systems on a directed graph with a fixed topology. It is assumed that only the outputs of each follower and its neighbors are available in the design of its distributed controllers. Equivalent output injection sliding mode observers are proposed for each follower to estimate the states of itself and its neighbors, and an observer-based distributed adaptive controller is designed for each follower to guarantee that it asymptotically synchronizes to a leader with tracking errors being semi-globally uniformly ultimately bounded, in which fuzzy logic systems are utilized to approximate unknown functions. Based on algebraic graph theory and the Lyapunov function approach, using the Filippov framework, the closed-loop system stability analysis is conducted. Finally, numerical simulations are provided to illustrate the effectiveness and potential of the developed design techniques.
A mesoscopic simulation on distributions of red blood cells in a bifurcating channel
NASA Astrophysics Data System (ADS)
Inoue, Yasuhiro; Takagi, Shu; Matsumoto, Yoichiro
2004-11-01
Transports of red blood cells (RBCs) or particles in bifurcated channels have been attracting renewed interest since the advent of MEMS concepts for sorting, analyzing, and removing cells or particles from a sample medium. In this talk, we present a result on the transport of RBCs in a bifurcating channel studied by using a mesoscale simulation technique of immiscible droplets, where RBCs have been modeled as immiscible droplets. The distribution of RBCs is represented by the fractional RBC flux into the two daughter channels as a function of the volumetric flow ratio between the daughters. The data obtained in our simulations are examined with a theoretical prediction in which we assume an exponential distribution for positions of RBCs in the mother channel. The theoretical predictions show good agreement with simulation results. A non-uniform distribution of RBCs in the mother channel effects a disproportional separation of RBC flux at a bifurcation.
Estimating Small-Body Gravity Field from Shape Model and Navigation Data
NASA Technical Reports Server (NTRS)
Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam
2008-01-01
This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite-element definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.
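The cube-based forward model described here is easy to prototype: replace each cubic element by a point mass at its center and sum the Newtonian attractions. The sketch below (G = 1, illustrative discretization, a sphere rather than a polyhedral shape) builds a uniform-density body from cubes; outside the body the summed field should approach GM/r^2:

```python
import math

def sphere_by_cubes(radius, h, rho, p):
    """Fill a sphere with cubic elements of side h, treat each element as a
    point mass rho*h^3 at its center (G = 1), and return the gravitational
    acceleration at field point p together with the total modeled mass."""
    ax = ay = az = 0.0
    mass = 0.0
    n = int(radius / h) + 1
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                x, y, z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                if x * x + y * y + z * z > radius * radius:
                    continue                     # element center outside body
                m = rho * h ** 3
                mass += m
                dx, dy, dz = x - p[0], y - p[1], z - p[2]
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                ax += m * dx / r3                # attraction points toward element
                ay += m * dy / r3
                az += m * dz / r3
    return (ax, ay, az), mass
```

Comparing the summed field against the monopole value built from the same total mass isolates the discretization error, which is the kind of accuracy assessment the paper reports for its finite-element models.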
The Galactic Nova Rate Revisited
NASA Astrophysics Data System (ADS)
Shafter, A. W.
2017-01-01
Despite its fundamental importance, a reliable estimate of the Galactic nova rate has remained elusive. Here, the overall Galactic nova rate is estimated by extrapolating the observed rate for novae reaching m ≤ 2 to include the entire Galaxy using a two-component disk plus bulge model for the distribution of stars in the Milky Way. The present analysis improves on previous work by considering important corrections for incompleteness in the observed rate of bright novae and by employing a Monte Carlo analysis to better estimate the uncertainty in the derived nova rates. Several models are considered to account for differences in the assumed properties of bulge and disk nova populations and in the absolute magnitude distribution. The simplest models, which assume uniform properties between bulge and disk novae, predict Galactic nova rates of ~50 to in excess of 100 per year, depending on the assumed incompleteness at bright magnitudes. Models where the disk novae are assumed to be more luminous than bulge novae are explored, and predict nova rates up to 30% lower, in the range of ~35 to ~75 per year. An average of the most plausible models yields a rate of 50 (+31/-23) yr^-1, which is arguably the best estimate currently available for the nova rate in the Galaxy. Virtually all models produce rates that represent significant increases over recent estimates, and bring the Galactic nova rate into better agreement with that expected based on comparison with the latest results from extragalactic surveys.
NASA Astrophysics Data System (ADS)
Lu, F. X.; Huang, T. B.; Tang, W. Z.; Song, J. H.; Tong, Y. M.
A computer model has been set up to simulate the flow and temperature fields and the radial distributions of atomic hydrogen and active carbonaceous species over a large-area substrate surface for a new type of dc arc plasma torch with rotating arc roots operating in gas recycling mode. A gas recycling ratio of 90% was assumed. In the numerical calculation of the plasma chemistry, the Thermo-Calc program and a powerful thermodynamic database were employed. Numerical calculations were performed using boundary conditions close to the experimental setup for large-area diamond film deposition. The results showed that the flow and temperature fields over a substrate surface of Φ60-100 mm were smooth and uniform. Calculations were also made for a plasma of the same geometry but without arc-root rotation, clearly demonstrating that the rotating-arc-root design is advantageous for high-quality, uniform deposition of large-area diamond films. Theoretical predictions of the growth rate and film quality, as well as their radial uniformity, and the influence of process parameters on large-area diamond deposition are discussed in detail based on the spatial distributions of atomic hydrogen and carbonaceous species in the plasma over the substrate surface obtained from thermodynamic calculations of the plasma chemistry, and are compared with experimental observations.
The Effect of Roughness Model on Scattering Properties of Ice Crystals.
NASA Technical Reports Server (NTRS)
Geogdzhayev, Igor V.; Van Diedenhoven, Bastiaan
2016-01-01
We compare stochastic models of microscale surface roughness assuming uniform and Weibull distributions of crystal facet tilt angles to calculate scattering by roughened hexagonal ice crystals using the geometric optics (GO) approximation. Both distributions are determined by similar roughness parameters, while the Weibull model depends on the additional shape parameter. Calculations were performed for two visible wavelengths (864 nm and 410 nm) for roughness values between 0.2 and 0.7 and Weibull shape parameters between 0 and 1.0 for crystals with aspect ratios of 0.21, 1 and 4.8. For this range of parameters we find that, for a given roughness level, varying the Weibull shape parameter can change the asymmetry parameter by up to about 0.05. The largest effect of the shape parameter variation on the phase function is found in the backscattering region, while the degree of linear polarization is most affected at the side-scattering angles. For high roughness, scattering properties calculated using the uniform and Weibull models are in relatively close agreement for a given roughness parameter, especially when a Weibull shape parameter of 0.75 is used. For smaller roughness values, a shape parameter close to unity provides a better agreement. Notable differences are observed in the phase function over the scattering angle range from 5deg to 20deg, where the uniform roughness model produces a plateau while the Weibull model does not.
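The two tilt-angle models can be compared directly by sampling. Below, uniform tilts are drawn on [0, t_max] and Weibull tilts by inverse-CDF with scale and shape parameters; the exact mapping between these parameters and the paper's roughness parameter is an assumption here, not taken from the text:

```python
import math
import random

def weibull_tilt(n, scale, shape, seed=0):
    """Sample facet-tilt parameters from a Weibull distribution via the
    inverse CDF: t = scale * (-ln(1 - u)) ** (1 / shape), u ~ Uniform(0,1)."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

def uniform_tilt(n, t_max, seed=0):
    """Sample facet-tilt parameters uniformly on [0, t_max]."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, t_max) for _ in range(n)]
```

For shape parameter 1 the Weibull model reduces to an exponential with mean equal to the scale, while for larger shape parameters the distribution concentrates around the scale; this extra degree of freedom is what lets the Weibull model adjust where the uniform model cannot, consistent with the shape-parameter sensitivity reported above.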
NASA Astrophysics Data System (ADS)
Vašina, P; Hytková, T; Eliáš, M
2009-05-01
The majority of current models of reactive magnetron sputtering assume a uniform discharge current density and the same temperature near the target and the substrate. However, in a real experimental set-up, the presence of the magnetic field causes a high-density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition, the heating of the background gas by sputtered particles, usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of reactive sputtering rather than to the prediction of coating properties. Outputs of this model are compared with those that assume a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to modelling the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of target utilization in the metallic and compound modes is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnetrons currently available on the market. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, for three main reasons. First, structured P2P networks assume uniform peer capacities, while peer capacities are heterogeneous in deployed P2P networks. Second, relying on the pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalance in the overlay address space and item distribution. Lastly, the placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of the load aggregation and dissemination techniques required by many load balancing algorithms. Two techniques are discussed: the tree-structure-based approach and the gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: the virtual-server-based approach, the power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance in the address space and data item distribution. As a case study, the chapter discusses a virtual-server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research directions and a summary.
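The "power of two choices" solution mentioned above is easy to demonstrate with a balls-into-bins sketch (a simplification that ignores peer heterogeneity and item sizes):

```python
import random

def place_balls(n_balls, n_bins, choices, rng):
    """Throw n_balls into n_bins; each ball probes `choices` random bins
    and joins the least-loaded probe. choices=1 is plain random placement."""
    load = [0] * n_bins
    for _ in range(n_balls):
        probes = [rng.randrange(n_bins) for _ in range(choices)]
        best = min(probes, key=lambda b: load[b])
        load[best] += 1
    return load

one = place_balls(100_000, 1_000, 1, random.Random(42))
two = place_balls(100_000, 1_000, 2, random.Random(42))
# With two choices, the maximum bin load concentrates sharply around the mean.
```

The classical result is that the maximum load drops from Θ(log n / log log n) above the mean with one choice to Θ(log log n) with two, which is why probing just one extra node pays off so well in DHT load balancing.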
Folta, James A.; Montcalm, Claude; Walton, Christopher
2003-01-01
A method and system for producing a thin film with highly uniform (or highly accurate custom-graded) thickness on a flat or curved substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source with controlled (and generally time-varying) velocity. In preferred embodiments, the method includes the steps of measuring the source flux distribution (using a test piece that is held stationary while exposed to the source), calculating a set of predicted film thickness profiles, each assuming the measured flux distribution and a different one of a set of sweep velocity modulation recipes, and determining from the predicted film thickness profiles a sweep velocity modulation recipe adequate to achieve a predetermined thickness profile. Aspects of the invention include a practical method of accurately measuring the source flux distribution, and a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal sweep velocity modulation recipe to achieve a desired thickness profile on a substrate. Preferably, the computer implements an algorithm in which many sweep velocity function parameters (for example, the speed at which each substrate spins about its center as it sweeps across the source) can be varied or set to zero.
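The thickness-prediction step can be sketched as a dwell-time integral: the thickness at each substrate point is the source flux seen by that point, weighted by the time spent at each sweep position. The Gaussian flux lobe and all numbers below are illustrative assumptions, not the patent's measured distribution:

```python
import math

def thickness_profile(xs, flux, sweep_positions, velocity):
    """Film thickness at substrate points xs after one sweep across the source.
    flux(u):     deposition rate at lateral offset u from the source axis.
    velocity(p): sweep speed when the substrate center is at position p.
    Integrates dwell time dt = dp / v(p) along the sweep (midpoint rule)."""
    thickness = []
    for x in xs:
        total = 0.0
        for p0, p1 in zip(sweep_positions, sweep_positions[1:]):
            pm = 0.5 * (p0 + p1)
            total += flux(x - pm) / velocity(pm) * (p1 - p0)
        thickness.append(total)
    return thickness

flux = lambda u: math.exp(-0.5 * u * u)            # Gaussian flux lobe (assumed)
sweep = [-10.0 + 20.0 * i / 400 for i in range(401)]
t1 = thickness_profile([-1.0, 0.0, 1.0], flux, sweep, lambda p: 1.0)
t2 = thickness_profile([-1.0, 0.0, 1.0], flux, sweep, lambda p: 2.0)  # double speed
```

Because deposited thickness scales with dwell time, doubling the sweep speed everywhere halves the film thickness; a non-constant velocity recipe reshapes the profile, which is the knob the patent's optimization turns.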
Dressing Diversity: Politics of Difference and the Case of School Uniforms
ERIC Educational Resources Information Center
Deane, Samantha
2015-01-01
Through an analysis of school uniform policies and theories of social justice, Samantha Deane argues that school uniforms and their foregoing policies assume that confronting strangers--an imperative of living in a democratic polity--is something that requires seeing sameness instead of recognizing difference. Imbuing schooling with a directive…
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Precipitating Condensation Clouds in Substellar Atmospheres
NASA Technical Reports Server (NTRS)
Ackerman, Andrew S.; Marley, Mark S.; Gore, Warren J. (Technical Monitor)
2000-01-01
We present a method to calculate vertical profiles of particle size distributions in condensation clouds of giant planets and brown dwarfs. The method assumes a balance between turbulent diffusion and precipitation in horizontally uniform cloud decks. Calculations for the Jovian ammonia cloud are compared with previous methods. An adjustable parameter describing the efficiency of precipitation allows the new model to span the range of predictions from previous models. Calculations for the Jovian ammonia cloud are found to be consistent with observational constraints. Example calculations are provided for water, silicate, and iron clouds on brown dwarfs and on a cool extrasolar giant planet.
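The core balance can be illustrated with a toy version: if eddy diffusion (diffusivity K = w*·H, for a convective velocity scale w* and mixing length H) balances sedimentation at f_sed times w*, the condensate mixing ratio decays exponentially above cloud base. This is only a sketch of the idea; the published model solves coupled equations for the total and condensed mixing ratios:

```python
import math

def condensate_profile(q_base, f_sed, z, z_base, H):
    """Toy diffusion/precipitation balance: K dq/dz = -f_sed * w* * q with
    eddy diffusivity K = w* * H gives exponential decay above cloud base."""
    return q_base * math.exp(-f_sed * (z - z_base) / H)

# f_sed = 2 (efficient precipitation), mixing length H = 7 km, made-up base value
q = [condensate_profile(1.0e-3, 2.0, z, 0.0, 7.0) for z in (0.0, 7.0, 14.0)]
```

The adjustable f_sed is the efficiency parameter mentioned in the abstract: large f_sed thins the cloud rapidly with height (strong precipitation), while f_sed → 0 recovers a well-mixed, vertically extended cloud.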
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1974-01-01
The elastic deformation of two ellipsoidal solids in contact and subjected to Hertzian stress distribution was evaluated numerically as part of a general study of the elastic deformation of such solids in elastohydrodynamic contacts. In the analysis the contact zone was divided into equal rectangular areas, and it was assumed that a uniform pressure is applied over each rectangular area. The influence of the size of the rectangular area upon accuracy was also studied. The results indicate the distance from the center of the contact at which elastic deformation becomes insignificant.
2011-08-01
[Extraction residue from the report's table of contents and figure captions; recoverable fragments: the curvilinear collision kernel assumes that only smaller particles within a centerline distance of two floc radii are intercepted by the settling particle; study titles include "…Aerobic Sediment Slurry" and Study 4, "Modeling the Impact of Flocculation on the Fate of Organic and Inorganic Particles"; Figure 4.2 shows three fOC distribution trends: small, uniform, and size-variable.]
NASA Technical Reports Server (NTRS)
Watkins, Charles E; Durling, Barbara J
1956-01-01
This report presents tabulated values of certain definite integrals that are involved in the calculation of near-field propeller noise when the chordwise forces are assumed to be either uniform or of a Dirac delta type. The tabulations cover a wide range of operating conditions and are useful for estimating propeller noise when either the concept of an effective radius or radial distributions of forces is considered. Use of the tabulations is illustrated by several examples of calculated results for some specific propellers.
Inaniwa, T; Kanematsu, N
2015-01-07
In scanned carbon-ion (C-ion) radiotherapy, some primary C-ions undergo nuclear reactions before reaching the target and the resulting particles deliver doses to regions at a significant distance from the central axis of the beam. The effects of these particles on physical dose distribution are accounted for in treatment planning by representing the transverse profile of the scanned C-ion beam as the superposition of three Gaussian distributions. In the calculation of biological dose distribution, however, the radiation quality of the scanned C-ion beam has been assumed to be uniform over its cross-section, taking the average value over the plane at a given depth (monochrome model). Since these particles, which have relatively low radiation quality, spread widely compared to the primary C-ions, the radiation quality of the beam should vary with radial distance from the central beam axis. To represent its transverse distribution, we propose a trichrome beam model in which primary C-ions, heavy fragments with atomic number Z ≥ 3, and light fragments with Z ≤ 2 are assigned to the first, second, and third Gaussian components, respectively. Assuming a realistic beam-delivery system, we performed computer simulations using Geant4 Monte Carlo code for analytical beam modeling of the monochrome and trichrome models. The analytical beam models were integrated into a treatment planning system for scanned C-ion radiotherapy. A target volume of 20 × 20 × 40 mm³ was defined within a water phantom. A uniform biological dose of 2.65 Gy (RBE) was planned for the target with the two beam models based on the microdosimetric kinetic model (MKM). The plans were recalculated with Geant4, and the recalculated biological dose distributions were compared with the planned distributions. The mean target dose of the recalculated distribution with the monochrome model was 2.72 Gy (RBE), while the dose with the trichrome model was 2.64 Gy (RBE). 
The monochrome model underestimated the RBE within the target due to the assumption of no radial variations in radiation quality. Conversely, the trichrome model accurately predicted the RBE even in a small target. Our results verify the applicability of the trichrome model for clinical use in C-ion radiotherapy treatment planning.
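The triple-Gaussian transverse parameterization described above can be sketched as follows; the weights and sigmas are hypothetical placeholders, not the values fitted in the planning system:

```python
import math

def triple_gaussian(r, weights, sigmas):
    """Transverse beam profile at radius r as a superposition of three
    normalized 2-D Gaussians (illustrative parameters only)."""
    total = 0.0
    for w, s in zip(weights, sigmas):
        total += w / (2.0 * math.pi * s * s) * math.exp(-r * r / (2.0 * s * s))
    return total

# First component: primary C-ions (narrow); second: heavy fragments (Z >= 3);
# third: light fragments (Z <= 2, broadest). Weights and widths are made up.
weights = (0.90, 0.07, 0.03)
sigmas = (0.4, 1.0, 3.0)   # cm, hypothetical
d0 = triple_gaussian(0.0, weights, sigmas)
d5 = triple_gaussian(5.0, weights, sigmas)
```

In the trichrome model each component additionally carries its own radiation quality, so the biological dose at large radius is dominated by the low-LET light-fragment Gaussian rather than the plane-averaged value used by the monochrome model.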
Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.
Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle
2014-11-01
To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissue. Monte Carlo (MC) simulation codes are extremely useful tools for calculating deposited energy. The present work focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source localized within the cortical bone. S-value (absorbed dose per unit cumulated activity) calculations were performed using PENELOPE and Monte Carlo N-Particle eXtended (MCNPX). The cytoplasm, nucleus, cell surface, mouse femur bone and Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2-10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, cytoplasm and cell surface. S-values calculated with PENELOPE agreed very well with the MCNPX results and the Medical Internal Radiation Dose (MIRD) values, with relative deviations of less than 4.5%. The dose distribution to mouse bone marrow cells showed that cells localized near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.
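The assumption of a source uniformly distributed within a spherical nucleus corresponds to sampling decay positions as below. This is a generic sketch; PENELOPE and MCNPX handle source sampling internally through their own source definitions:

```python
import math
import random

def sample_in_sphere(radius, n, rng):
    """Uniformly distributed source positions inside a sphere: an isotropic
    direction (Gaussian trick) combined with radius r = R * u**(1/3), which
    gives the r**2 density needed for volume uniformity."""
    pts = []
    for _ in range(n):
        x, y, z = (rng.gauss(0.0, 1.0) for _ in range(3))
        norm = math.sqrt(x * x + y * y + z * z)
        r = radius * rng.random() ** (1.0 / 3.0)
        pts.append((r * x / norm, r * y / norm, r * z / norm))
    return pts

pts = sample_in_sphere(5.0, 5000, random.Random(1))  # 5 um nucleus, hypothetical
```

The cube-root transform is the standard inverse-CDF step: the fraction of a sphere's volume inside radius r is (r/R)³, so u uniform on (0, 1) maps to r = R·u^(1/3).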
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Howlett, Cullan
2018-06-01
In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject, or numeric interpolation (sometimes via a lookup table) for projecting random Uniform samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
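A sketch of such an analytic sampler is below, inverting the NFW enclosed-mass function via the Lambert W function; the branch choice and normalization here are my reconstruction of the approach the abstract describes, not code copied from the paper's released R/Python implementations:

```python
import numpy as np
from scipy.special import lambertw

def mu(x):
    """NFW enclosed-mass shape: mu(x) = ln(1 + x) - x / (1 + x)."""
    return np.log1p(x) - x / (1.0 + x)

def nfw_quantile(p, c):
    """Radius (in units of the scale radius r_s) enclosing a fraction p of an
    NFW halo's mass, for concentration c: solves mu(x) = p * mu(c) in closed
    form via the principal branch of the Lambert W function."""
    y = p * mu(c)
    w = lambertw(-np.exp(-y - 1.0)).real  # principal branch; argument in (-1/e, 0)
    return -1.0 - 1.0 / w

p = np.array([0.1, 0.5, 0.9])
x = nfw_quantile(p, 10.0)   # radii enclosing 10%, 50%, 90% of the mass
```

Feeding uniform random p through this quantile function yields NFW-distributed radii directly, replacing the accept-reject or lookup-table interpolation steps mentioned in the abstract.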
NASA Astrophysics Data System (ADS)
Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.
2013-09-01
The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes random uniform uncorrelated distribution of sources and to overcome this limitation a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south western region. The DBMS values are found as low as 22 km in the south west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most of the places DBMS are much shallower than the Moho depth, earlier found from the seismic study and may be representing the thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
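The conventional centroid estimate that the modified method builds on can be illustrated on a synthetic spectrum: the top depth Zt comes from the high-wavenumber slope of the log amplitude spectrum, the centroid depth Z0 from the low-wavenumber slope of ln(amplitude/k), and the bottom depth is Zb = 2·Z0 − Zt. Depths and wavenumber bands below are hypothetical, and the low-wavenumber fit recovers Zb only approximately:

```python
import math

def spectrum_amp(k, zt, zb):
    # Radially averaged amplitude spectrum of a magnetic layer with random,
    # uncorrelated sources between top depth zt and bottom depth zb
    return math.exp(-k * zt) - math.exp(-k * zb)

def slope(xs, ys):
    # Ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

zt_true, zb_true = 3.0, 40.0                      # km, hypothetical
hi_k = [0.40 + 0.02 * i for i in range(20)]       # high-wavenumber band -> Zt
lo_k = [0.0005 + 0.0005 * i for i in range(10)]   # low-wavenumber band -> Z0
zt = -slope(hi_k, [math.log(spectrum_amp(k, zt_true, zb_true)) for k in hi_k])
z0 = -slope(lo_k, [math.log(spectrum_amp(k, zt_true, zb_true) / k) for k in lo_k])
zb = 2.0 * z0 - zt   # depth to the bottom of the magnetic sources (DBMS)
```

The random-uniform-source assumption enters through the shape of `spectrum_amp`; the paper's modification replaces it with a scaling (fractal) source distribution, which changes the spectral slopes and hence the recovered depths.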
NASA Astrophysics Data System (ADS)
Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.
2013-09-01
Mercury's Na Exosphere from MESSENGER Data
NASA Technical Reports Server (NTRS)
Killen, Rosemary M.; Burger, M. H.; Cassidy, T. A.; Sarantos, M.; Vervack, R. J.; McClintock, W. El; Merkel, A. W.; Sprague, A. L.; Solomon, S. C.
2012-01-01
MESSENGER entered orbit about Mercury on March 18, 2011. Since then, the Ultraviolet and Visible Spectrometer (UVVS) channel of MESSENGER's Mercury Atmospheric and Surface Composition Spectrometer (MASCS) has been observing Mercury's exosphere nearly continuously. Daily measurements of Na brightness were fitted with non-uniform exospheric models. With Monte Carlo sampling we traced the trajectories of a representative number of test particles, generally one million per run per source process, until photoionization, escape from the gravitational well, or permanent sticking at the surface removed the atom from the simulation. Atoms were assumed to partially thermally accommodate on each encounter with the surface, with an accommodation coefficient of 0.25. Runs for different assumed source processes were performed separately, scaled, and co-added. Once these model results were saved onto a 3D grid, we ran lines of sight from the MESSENGER spacecraft to infinity using the SPICE kernels and computed brightness integrals. Note that only particles that contribute to the measurement can be constrained with our method. Atoms and molecules produced on the nightside must escape the shadow in order to scatter light if the excitation process is resonant light scattering, as assumed here. The aggregate distribution of Na atoms fits a 1200 K gas with a photon-stimulated desorption (PSD) distribution, along with a hotter component. Our models constrain the hot component, assumed to be impact vaporization, to be emitted with a 2500 K Maxwellian. Most orbits show a dawnside enhancement in the hot component broadly spread over the leading hemisphere. However, on some dates there is no dawn/dusk asymmetry. The proportion of the hot to cold source appears to be highly variable.
Closed-form solution for Eshelby's elliptic inclusion in antiplane elasticity using complex variable
NASA Astrophysics Data System (ADS)
Chen, Y. Z.
2013-12-01
This paper provides a closed-form solution for Eshelby's elliptic inclusion in antiplane elasticity. In the formulation, the prescribed eigenstrains are not restricted to a uniform distribution but may also take a linear form. After introducing the complex variable and the conformal mapping, the continuation conditions for the traction and displacement along the interface in the physical plane can be reduced to conditions along the unit circle. The relevant complex potentials defined in the inclusion and the matrix can be separated from the continuation conditions of the traction and displacement along the interface. Expressions for the real strains and stresses in the inclusion arising from the assumed eigenstrains are presented. Results for the case of a linear distribution of eigenstrain are obtained for the first time in this paper.
Electroosmotic flow in a microcavity with nonuniform surface charges.
Halpern, David; Wei, Hsien-Hung
2007-08-28
In this work, we theoretically explore the characteristics of electroosmotic flow (EOF) in a microcavity with nonuniform surface charges. It is well known that a uniformly charged EOF does not give rise to flow separation because of its irrotational nature, as opposed to the classical problem of viscous flow past a cavity. However, if the cavity walls bear nonuniform surface charges, then the similitude between electric and flow fields breaks down, leading to the generation of vorticity in the cavity. Because this vorticity must necessarily diffuse into the exterior region that possesses a zero vorticity set by a uniform EOF, a new flow structure emerges. Assuming Stokes flow, we employ a boundary element method to explore how a nonuniform charge distribution along the cavity surface affects the flow structure. The results show that the stream can be susceptible to flow separation and exhibits a variety of flow structures, depending on the distributions of zeta potentials and the aspect ratio of the cavity. The interactions between patterned EOF vortices and Moffatt eddies are further demonstrated for deep cavities. This work not only has implications for electrokinetic flow induced by surface imperfections but also provides optimal strategies for achieving effective mixing in microgrooves.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
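The prior-knowledge assumption criticized above usually means a log-distance path-loss model with calibrated parameters; a sketch of that inversion (P0, n, and d0 are hypothetical calibration values, exactly the kind of inputs the proposed approach avoids needing):

```python
def distance_from_rss(rss_dbm, p0_dbm=-40.0, n=3.0, d0=1.0):
    """Invert the log-distance path-loss model
    RSS(d) = P0 - 10 * n * log10(d / d0), where P0 is the received power at
    reference distance d0 and n is the path-loss exponent (all assumed known)."""
    return d0 * 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * n))

d = distance_from_rss(-70.0)   # -> 10.0 m with these made-up parameters
```

Because n varies strongly between indoor environments, a distance estimate from this formula without site calibration can be off by an order of magnitude, motivating approaches that do not assume the propagation model a priori.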
Determining irrigation distribution uniformity and efficiency for nurseries
R. Thomas Fernandez
2010-01-01
A simple method for testing the distribution uniformity of overhead irrigation systems is described. The procedure is described step-by-step along with an example. Other uses of distribution uniformity testing are presented, as well as common situations that affect distribution uniformity and how to alleviate them.
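The standard low-quarter calculation behind such a test can be sketched as follows (the catch-can volumes are made-up example numbers):

```python
def distribution_uniformity(catch_volumes):
    """Low-quarter distribution uniformity: the mean of the lowest 25% of
    catch-can volumes divided by the overall mean (1.0 = perfectly uniform)."""
    v = sorted(catch_volumes)
    n_lq = max(1, len(v) // 4)
    low_quarter_mean = sum(v[:n_lq]) / n_lq
    overall_mean = sum(v) / len(v)
    return low_quarter_mean / overall_mean

du = distribution_uniformity([10, 10, 10, 10, 8, 8, 8, 8])
```

Here the lowest quarter averages 8 against an overall mean of 9, giving DU ≈ 0.89; values well below about 0.7 usually signal worn nozzles, pressure problems, or poor head spacing.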
NASA Astrophysics Data System (ADS)
Olivera, F.; Choi, J.; Socolofsky, S.
2006-12-01
Watershed responses to storm events are strongly affected by the spatial and temporal patterns of rainfall; that is, the spatial distribution of the precipitation intensity and its evolution over time. Although real storms are moving entities with non-uniform intensities in both space and time, hydrological applications often synthesize these attributes by assuming storms that are uniformly distributed and have variable intensity according to a pre-defined hyetograph shape. As one considers watersheds of greater size, the non-uniformity of rainfall becomes more important, because a storm may not cover the watershed's entire area and may not stay in the watershed for its full duration. In order to incorporate parameters such as storm area, propagation velocity and direction, and intensity distribution in the definition of synthetic storms, it is necessary to determine these storm characteristics from spatially distributed precipitation data. To date, most algorithms for identifying and tracking storms have been applied to short time-step radar reflectivity data (i.e., 15 minutes or less), where storm features are captured in an effectively synoptic manner. For the entire United States, however, the most reliable distributed precipitation data are the one-hour accumulated 4 km × 4 km gridded NEXRAD data of the U.S. National Weather Service (NWS, 2005). The one-hour aggregation level of the data, though, makes it more difficult to identify and track storms than when using sequences of synoptic radar reflectivity data, because storms can traverse a number of NEXRAD cells and change size and shape appreciably between consecutive data maps. In this paper, we present a methodology to overcome the identification and tracking difficulties and to extract the characteristics of moving storms (e.g., size, propagation velocity and direction, and intensity distribution) from one-hour accumulated distributed rainfall data. 
The algorithm uses Gaussian Mixture Models (GMM) for storm identification and image processing for storm tracking. The method has been successfully applied to Brazos County in Texas using the 2003 Multi-sensor Precipitation Estimator (MPE) NEXRAD rainfall data.
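The identification step can be illustrated with a much simpler connected-component sketch on a synthetic rainfall grid (the paper itself uses Gaussian mixture models rather than thresholding, and the grid values below are made up):

```python
import numpy as np
from scipy import ndimage

# Synthetic one-hour accumulated rainfall grid (mm); two made-up storm cells
rain = np.zeros((20, 20))
rain[3:7, 4:8] = 5.0       # storm A
rain[12:17, 10:15] = 12.0  # storm B

# Identify storms as connected regions above an accumulation threshold,
# then characterize each by its rainfall-weighted centroid
mask = rain > 1.0
labels, n_storms = ndimage.label(mask)
centroids = ndimage.center_of_mass(rain, labels, range(1, n_storms + 1))
```

Tracking then amounts to matching centroids (or fitted mixture components) between consecutive hourly maps, which is exactly where the one-hour aggregation makes simple nearest-neighbor matching unreliable and motivates the image-processing step described above.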
NASA Astrophysics Data System (ADS)
Rana, Vijay; Gill, Kamaljit; Rudin, Stephen; Bednarek, Daniel R.
2012-03-01
The current version of the real-time skin-dose-tracking system (DTS) we have developed assumes the exposure is contained within the collimated beam and is uniform except for inverse-square variation. This study investigates the significance of factors that contribute to beam non-uniformity such as the heel effect and backscatter from the patient to areas of the skin inside and outside the collimated beam. Dose-calibrated Gafchromic film (XR-RV3, ISP) was placed in the beam in the plane of the patient table at a position 15 cm tube-side of isocenter on a Toshiba Infinix C-Arm system. Separate exposures were made with the film in contact with a block of 20-cm solid water providing backscatter and with the film suspended in air without backscatter, both with and without the table in the beam. The film was scanned to obtain dose profiles and comparison of the profiles for the various conditions allowed a determination of field non-uniformity and backscatter contribution. With the solid-water phantom and with the collimator opened completely for the 20-cm mode, the dose profile decreased by about 40% on the anode side of the field. Backscatter falloff at the beam edge was about 10% from the center and extra-beam backscatter decreased slowly with distance from the field, being about 3% of the beam maximum at 6 cm from the edge. Determination of the magnitude of these factors will allow them to be included in the skin-dose-distribution calculation and should provide a more accurate determination of peak-skin dose for the DTS.
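The inverse-square variation that the current DTS assumes can be sketched as below; the reference dose and distances are made-up numbers, and the study's point is that heel effect and backscatter add further multiplicative corrections on top of this:

```python
def inverse_square_dose(dose_ref, d_ref, d):
    """Scale a dose rate from a reference source-to-skin distance d_ref to a
    new distance d using the inverse-square law (point-source assumption)."""
    return dose_ref * (d_ref / d) ** 2

d1 = inverse_square_dose(10.0, 60.0, 120.0)   # doubling distance quarters dose
```

The measurements in the abstract suggest the missing factors are far from negligible: a ~40% anode-side heel-effect falloff and a ~10% backscatter falloff near the field edge would both be folded in as position-dependent multipliers on this baseline.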
Experimental verification of the shape of the excitation depth distribution function for AES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tougaard, S.; Jablonski, A.; Institute of Physical Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, 01-224 Warsaw
2011-09-15
In the common formalism of AES, it is assumed that the in-depth distribution of ionizations is uniform. There are experimental indications that this assumption may not be true for certain primary electron energies and solids. The term "excitation depth distribution function" (EXDDF) has been introduced to describe the distribution of ionizations at energies used in AES. This function is conceptually equivalent to the Phi-rho-z function of electron probe microanalysis (EPMA). There are, however, experimental difficulties in determining this function, in particular for energies below ~10 keV. In the present paper, we investigate the possibility of determining the shape of the EXDDF from the background of inelastically scattered electrons on the low-energy side of the Auger electron features in electron energy spectra. The experimentally determined EXDDFs are compared with EXDDFs determined from Monte Carlo simulations of electron trajectories in solids. It is found that this technique is useful for the experimental determination of the EXDDF.
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest-path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial-time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
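A distribution has increasing failure rate (IFR) when h(t) = f(t)/(1 − F(t)) is non-decreasing; for a Uniform(a, b) arrival time this is immediate, which is why uniform arrivals fall inside the class the polynomial-time algorithm covers. A small numerical check, with made-up numbers:

```python
def hazard_uniform(t, a, b):
    """Hazard (failure) rate of a Uniform(a, b) arrival time for a <= t < b:
    h(t) = f(t) / (1 - F(t)) = (1/(b-a)) / ((b-t)/(b-a)) = 1 / (b - t)."""
    return 1.0 / (b - t)

ts = [0.0, 2.0, 4.0, 6.0, 8.0]
hs = [hazard_uniform(t, 0.0, 10.0) for t in ts]   # strictly increasing in t
```

Intuitively, the longer you have waited for a uniformly distributed bus, the more likely it is to arrive in the next instant; the exponential distribution, with its constant hazard, sits at the boundary of the IFR class.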
Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Meegan, Charles A.
1997-01-01
This presentation will describe two applications of Bayesian statistics to gamma-ray bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because it easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.
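The structure of this Bayes-factor comparison can be illustrated with a one-dimensional toy calculation: isotropy predicts a moment of zero (observed with Gaussian measurement error), while the galactic alternative spreads the moment uniformly over positive values up to some bound. This is a hedged sketch, not the presenter's actual computation; the Gaussian error model and the `upper` bound are placeholder assumptions.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bayes_factor(measured, sigma, upper):
    """Bayes factor of 'moment = 0' (isotropy) over 'moment ~ Uniform(0, upper)'."""
    evidence_iso = normal_pdf(measured, 0.0, sigma)
    # Marginal likelihood under the uniform prior: the Gaussian likelihood
    # averaged over [0, upper], i.e. (CDF(upper) - CDF(0)) / upper.
    evidence_uniform = (normal_cdf(upper, measured, sigma)
                       - normal_cdf(0.0, measured, sigma)) / upper
    return evidence_iso / evidence_uniform
```

A measurement tightly consistent with zero then yields a large Bayes factor favoring isotropy, while a measurement far from zero favors the diffuse uniform alternative.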
Khalaf, Majid; Brey, Richard R; Meldrum, Jeff
2013-01-01
A new leg voxel model in two different positions (straight and bent) has been developed for in vivo measurement calibration purposes. This voxel phantom is a representation of a human leg that may provide a substantial enhancement to Monte Carlo modeling because it more accurately models different geometric leg positions and the non-uniform distribution of Am throughout the leg bones instead of assuming a one-position geometry and a uniform distribution of radionuclides. This was accomplished by performing a radiochemical analysis on small sections of the leg bones from the U.S. Transuranium and Uranium Registries (USTUR) case 0846. USTUR case 0846 represents an individual who was repeatedly contaminated by Am via chronic inhalation. To construct the voxel model, high resolution (2 mm) computed tomography (CT) images of the USTUR case 0846 leg were obtained in different positions. Thirty-six (36) objects (universes) were segmented manually from the CT images using 3D-Doctor software. Bones were divided into 30 small sections with an assigned weight exactly equal to the weight of bone sections obtained from radiochemical analysis of the USTUR case 0846 leg. The segmented images were then converted into a boundary file, and the Human Monitoring Laboratory (HML) voxelizer was used to convert the boundary file into the leg voxel phantom. Excluding the surrounding air regions, the straight leg phantom consists of 592,023 voxels, while the bent leg consists of 337,567 voxels. The resulting leg voxel model is now ready for use as an MCNPX input file to simulate in vivo measurement of bone-seeking radionuclides.
NASA Astrophysics Data System (ADS)
Raghunathan, A. V.; Aluru, N. R.
2007-07-01
A self-consistent molecular dynamics (SCMD) formulation is presented for electric-field-mediated transport of water and ions through a nanochannel connected to reservoirs or baths. The SCMD formulation is compared with a uniform field MD approach, where the applied electric field is assumed to be uniform, for 2 nm and 3.5 nm wide nanochannels immersed in a 0.5 M KCl solution. Reservoir ionic concentrations are maintained using the dual-control-volume grand canonical molecular dynamics technique. Simulation results with varying channel height indicate that the SCMD approach calculates the electrostatic potential in the simulation domain more accurately compared to the uniform field approach, with the deviation in results increasing with the channel height. The translocation times and ionic fluxes predicted by uniform field MD can be substantially different from those predicted by the SCMD approach. Our results also indicate that during a 2 ns simulation time K+ ions can permeate through a 1 nm channel when the applied electric field is computed self-consistently, while the permeation is not observed when the electric field is assumed to be uniform.
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
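The PCA step described above amounts to an SVD of the demeaned epochs-by-stations residual matrix, with the leading temporal modes and their spatial responses removed. A minimal numpy sketch under that reading (the function name, mode count, and data shapes are illustrative; the KLE weighting is omitted):

```python
import numpy as np

def spatiotemporal_filter(series, n_modes=1):
    """Remove the leading principal components from network time series.

    series: (n_epochs, n_stations) array of detrended coordinate residuals.
    Returns (filtered, modes, responses), where `modes` are the temporally
    varying common-mode signals and `responses` their per-station weights.
    """
    demeaned = series - series.mean(axis=0)
    # SVD of the data matrix is equivalent to PCA of the covariance matrix.
    u, s, vt = np.linalg.svd(demeaned, full_matrices=False)
    modes = u[:, :n_modes] * s[:n_modes]   # temporal eigenmodes
    responses = vt[:n_modes]               # spatial responses
    common = modes @ responses
    return demeaned - common, modes, responses
```

When the spatial response of the first mode is nearly uniform across stations, as the abstract reports for southern California, this reduces to the common practice of subtracting a network-mean "stacked" time series.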
Evaluation of an Ensemble Dispersion Calculation.
NASA Astrophysics Data System (ADS)
Draxler, Roland R.
2003-02-01
A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.
Chang, Jenghwa
2017-06-01
To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single-isocenter-for-multiple-targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean and standard deviations of σx, σy, σz. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a 3D independent normal distribution with a zero mean and a uniform standard deviation of σδ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation of σR equal to the product of σδ·π/180 and dI⇔T, the distance between the isocenter and the CTV. The two random vectors (S and R) were summed, normalized, and transformed to spherical coordinates to derive the chi distribution with three degrees of freedom for the radial coordinate of S+R. The PTV margin was determined using the critical value of this distribution at a 0.05 significance level, so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σR and dI⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σx = σy = σz = 0.715 mm), a σR = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%; equivalently, an additional 0.2-mm PTV margin is needed to prevent this loss of coverage.
If we choose 0.2 mm as the threshold, any σR > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σδ that can be ignored is 0.45° (0.0079 rad) for dI⇔T = 50 mm or 0.23° (0.004 rad) for dI⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/precision treatments like SRS/SBRT, particularly when the distance between the isocenter and the target is large. © 2017 American Association of Physicists in Medicine.
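The numbers quoted above can be reproduced numerically: the 95th percentile of the chi distribution with three degrees of freedom (≈2.796) times the combined standard deviation gives the PTV margin. A stdlib-only sketch, assuming the quadrature combination of independent setup and rotational errors described in the abstract (function names are illustrative):

```python
import math

def chi3_cdf(x):
    # CDF of the chi distribution with 3 degrees of freedom (Maxwell form):
    # F(x) = erf(x/sqrt(2)) - sqrt(2/pi) * x * exp(-x^2 / 2).
    return math.erf(x / math.sqrt(2)) - math.sqrt(2 / math.pi) * x * math.exp(-x * x / 2)

def chi3_critical(p=0.95):
    # Invert the CDF by bisection on [0, 10].
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if chi3_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ptv_margin(sigma_setup, sigma_rot=0.0, p=0.95):
    # Independent normal setup and rotational errors add in quadrature.
    return math.sqrt(sigma_setup**2 + sigma_rot**2) * chi3_critical(p)
```

`ptv_margin(0.715)` returns ≈2.0 mm and `ptv_margin(0.715, 0.328)` returns ≈2.2 mm, matching the extra 0.2-mm margin quoted in the abstract.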
Fattal, D R; Ben-Shaul, A
1994-01-01
A molecular, mean-field theory of chain packing statistics in aggregates of amphiphilic molecules is applied to calculate the conformational properties of the lipid chains comprising the hydrophobic cores of dipalmitoyl-phosphatidylcholine (DPPC), dioleoyl-phosphatidylcholine (DOPC), and palmitoyl-oleoyl-phosphatidylcholine (POPC) bilayers in their fluid state. The central quantity in this theory, the probability distribution of chain conformations, is evaluated by minimizing the free energy of the bilayer assuming only that the segment density within the hydrophobic region is uniform (liquidlike). Using this distribution we calculate chain conformational properties such as bond orientational order parameters and spatial distributions of the various chain segments. The lipid chains, both the saturated palmitoyl (-(CH2)14-CH3) and the unsaturated oleoyl (-(CH2)7-CH = CH-(CH2)7-CH3) chains are modeled using rotational isomeric state schemes. All possible chain conformations are enumerated and their statistical weights are determined by the self-consistency equations expressing the condition of uniform density. The hydrophobic core of the DPPC bilayer is treated as composed of single (palmitoyl) chain amphiphiles, i.e., the interactions between chains originating from the same lipid headgroup are assumed to be the same as those between chains belonging to different molecules. Similarly, the DOPC system is treated as a bilayer of oleoyl chains. The POPC bilayer is modeled as an equimolar mixture of palmitoyl and oleoyl chains. Bond orientational order parameter profiles, and segment spatial distributions are calculated for the three systems above, for several values of the bilayer thickness (or, equivalently, average area/headgroup) chosen, where possible, so as to allow for comparisons with available experimental data and/or molecular dynamics simulations. 
In most cases the agreement between the mean-field calculations, which are relatively easy to perform, and the experimental and simulation data is very good, supporting their use as an efficient tool for analyzing a variety of systems subject to varying conditions (e.g., bilayers of different compositions or thicknesses at different temperatures). PMID:7811955
Two-dimensional symmetry breaking of fluid density distribution in closed nanoslits.
Berim, Gersh O; Ruckenstein, Eli
2008-01-14
Stable and metastable fluid density distributions (FDDs) in a closed nanoslit between two identical parallel solid walls have been identified on the basis of a nonlocal canonical ensemble density functional theory. As in Monte Carlo simulations, periodicity of the FDD in one of the lateral directions (parallel to the wall surfaces), denoted the x direction, was assumed. In the other lateral direction, the y direction, the FDD was considered uniform. It was found that, depending on the average fluid density in the slit, both uniform and nonuniform FDDs in the x direction can occur. The uniform FDDs are either symmetric or asymmetric about the middle plane between the walls, the latter being the consequence of a symmetry breaking across the slit. The nonuniform FDDs in the x direction occur either as a bump on a thin liquid film covering the walls or as a liquid bridge between the walls, and provide symmetry breaking in the x direction. For small and large average densities, the stable state is uniform in the x direction and symmetric about the middle plane between the walls. In the intermediate range of average density, and depending on the length L(x) of the FDD period, the stable state can be represented either by a FDD that is uniform in the x direction and asymmetric about the middle of the slit (small values of L(x)), or by a bump- or bridgelike FDD for intermediate and large values of L(x), respectively. These results are in agreement with Monte Carlo simulations performed earlier by other authors. Because the free energy of the stable state decreases monotonically with increasing L(x), one can conclude that the real period is very large (infinite) and that, for the values of the parameters employed, a single bridge of finite length over the entire slit is generated.
Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA
2003-12-23
A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with time-varying flux distribution as a function of time. Preferably, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source with the source operated at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL with CONED Model
NASA Technical Reports Server (NTRS)
Emmons, D.; Acebal, A.; Pulkkinen, A.; Taktakishvili, A.; MacNeice, P.; Odstricil, D.
2013-01-01
The combination of the Wang-Sheeley-Arge (WSA) coronal model, ENLIL heliospherical model version 2.7, and CONED Model version 1.3 (WSA-ENLIL with CONED Model) was employed to form ensemble forecasts for 15 halo coronal mass ejections (halo CMEs). The input parameter distributions were formed from 100 sets of CME cone parameters derived from the CONED Model. The CONED Model used image processing along with the bootstrap approach to automatically calculate cone parameter distributions from SOHO/LASCO imagery based on techniques described by Pulkkinen et al. (2010). The input parameter distributions were used as input to WSA-ENLIL to calculate the temporal evolution of the CMEs, which were analyzed to determine the propagation times to the L1 Lagrangian point and the maximum Kp indices due to the impact of the CMEs on the Earth's magnetosphere. The Newell et al. (2007) Kp index formula was employed to calculate the maximum Kp indices based on the predicted solar wind parameters near Earth assuming two magnetic field orientations: a completely southward magnetic field and a uniformly distributed clock-angle in the Newell et al. (2007) Kp index formula. The forecasts for 5 of the 15 events had accuracy such that the actual propagation time was within the ensemble average plus or minus one standard deviation. Using the completely southward magnetic field assumption, 10 of the 15 events contained the actual maximum Kp index within the range of the ensemble forecast, compared to 9 of the 15 events when using a uniformly distributed clock angle.
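The Kp estimates above rest on the Newell et al. (2007) coupling function, dΦ_MP/dt ∝ v^(4/3) B_T^(2/3) sin^(8/3)(θ_c/2), where B_T is the transverse IMF magnitude and θ_c the clock angle. A sketch of that ingredient (units of km/s and nT are assumed; the overall normalization and the regression from coupling to Kp used in the study are omitted):

```python
import math

def newell_coupling(v, by, bz):
    """Newell et al. (2007) solar wind-magnetosphere coupling function,
    d(Phi_MP)/dt ~ v^(4/3) * Bt^(2/3) * sin(theta_c / 2)^(8/3),
    up to a constant normalization omitted here."""
    bt = math.hypot(by, bz)           # transverse IMF magnitude
    theta = math.atan2(by, bz)        # IMF clock angle
    return v ** (4.0 / 3.0) * bt ** (2.0 / 3.0) * abs(math.sin(theta / 2.0)) ** (8.0 / 3.0)
```

Under the completely southward assumption θ_c = 180°, the sin^(8/3) factor reaches its maximum of 1, which is why that case bounds the predicted Kp from above relative to the uniformly distributed clock-angle case.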
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
NASA Astrophysics Data System (ADS)
Hopkins, M. A.; Allsopp, D. W. E.; Kappers, M. J.; Oliver, R. A.; Humphreys, C. J.
2017-12-01
The efficiency of light emitting diodes (LEDs) remains a topic of great contemporary interest due to their potential to reduce the amount of energy consumed in lighting. The current consensus is that electrons and holes distribute themselves through the emissive region by a drift-diffusion process which results in a highly non-uniform distribution of the light emission and can reduce efficiency. In this paper, the measured variations in the external quantum efficiency of a range of InGaN/GaN LEDs with different numbers of quantum wells (QWs) are shown to compare closely with the predictions of a revised ABC model, in which it is assumed that the electrically injected electrons and holes are uniformly distributed through the multi-quantum well (MQW) region, or nearly so, and hence carrier recombination occurs equally in all the quantum wells. The implications of the reported results are that drift-diffusion plays a far lesser role in cross-well carrier transport than previously thought; that the dominant cause of efficiency droop is intrinsic to the quantum wells and that reductions in the density of non-radiative recombination centers in the MQW would enable the use of more QWs and thereby reduce Auger losses by spreading carriers more evenly across a wider emissive region.
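The revised ABC-model argument can be made concrete with a small sketch: recombination per well scales as An + Bn² + Cn³, so spreading a given carrier population uniformly over N wells lowers the per-well density and, disproportionately, the Auger (Cn³) loss. The coefficients below are illustrative order-of-magnitude values, not fitted parameters from the paper:

```python
def internal_quantum_efficiency(n, a=1e7, b=2e-11, c=2e-30):
    """ABC recombination model: IQE = B n^2 / (A n + B n^2 + C n^3).

    n is the carrier density per quantum well (cm^-3); a, b, c are
    illustrative SRH, radiative, and Auger coefficients (s^-1, cm^3/s,
    cm^6/s), chosen for demonstration only.
    """
    return b * n**2 / (a * n + b * n**2 + c * n**3)

def iqe_with_spreading(total_density, n_wells):
    # Uniform injection spreads carriers across all wells, lowering the
    # density in each well and hence the cubic Auger loss term.
    return internal_quantum_efficiency(total_density / n_wells)
```

For example, spreading the same total density over ten wells instead of one substantially raises the computed IQE at high injection, mirroring the paper's point that more QWs reduce Auger losses when carriers distribute uniformly.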
NASA Astrophysics Data System (ADS)
Sahmani, S.; Aghdam, M. M.
2017-12-01
Morphology and pore size play an essential role in the mechanical properties as well as the associated biological capability of a porous structure made of biomaterials. The objective of the current study is to predict the Young's modulus and Poisson's ratio of nanoporous biomaterials including refined truncated cube cells based on a hyperbolic shear deformable beam model. Analytical relationships for the mechanical properties of nanoporous biomaterials are given as a function of the refined cell's dimensions. After that, the size dependency in the nonlinear bending behavior of micro/nano-beams made of such nanoporous biomaterials is analyzed using the nonlocal strain gradient elasticity theory. It is assumed that the micro/nano-beam has one movable end under axial compression in conjunction with a uniform distributed lateral load. The Galerkin method together with an improved perturbation technique is employed to propose an explicit analytical expression for the nonlocal strain gradient load-deflection curves of micro/nano-beams made of nanoporous biomaterials subjected to a uniform transverse distributed load. It is found that with increasing pore size, the micro/nano-beam undergoes much more deflection for a given distributed load due to the reduction in the stiffness of the nanoporous biomaterial. This pattern is more prominent for lower values of the applied axial compressive load at the free end of the micro/nano-beam.
Particle circulation and solids transport in large bubbling fluidized beds. Progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Homsy, G.M.
1982-04-01
We have undertaken a theoretical study of the possibility of the formation of plumes or channeling when coal particles volatilize upon introduction to a fluidized bed (Fitzgerald, 1980). We have completed the analysis of the basic state of uniform flow and are currently completing a stability analysis. We have modified the continuum equations of fluidization, Homsy et al. (1980), to include the source of gas due to volatilization, which we assume to be uniformly distributed spatially. Simplifying these equations and solving leads to the prediction of a basic state analogous to the state of uniform fluidization found when no source is present within the medium. We are currently completing a stability analysis of this basic state which will give the critical volatilization rate above which the above simple basic state is unstable. Because of the experimental evidence of Jewett and Lawless (1981), who observed regularly spaced plume-like instabilities upon drying a bed of saturated silica gel, we are considering two-dimensional periodic disturbances. The analysis is similar to that given by Homsy et al. (1980) and Medlin et al. (1974). We hope to determine the stability limits for this system shortly.
NASA Astrophysics Data System (ADS)
Kamali, M.; Shamsi, M.; Saidi, A. R.
2018-03-01
As a first endeavor, the effect of nonlinear elastic foundation on the postbuckling behavior of smart magneto-electro-elastic (MEE) composite nanotubes is investigated. The composite nanotube is affected by a non-uniform thermal environment. A typical MEE composite nanotube consists of microtubules (MTs) and carbon nanotubes (CNTs) with a MEE cylindrical nanoshell for smart control. It is assumed that the nanoscale layers of the system are coupled by a polymer matrix or filament network depending on the application. In addition to thermal loads, magneto-electro-mechanical loads are applied to the composite nanostructure. Length scale effects are taken into account using the nonlocal elasticity theory. The principle of virtual work and von Karman's relations are used to derive the nonlinear governing differential equations of MEE CNT-MT nanotubes. Using Galerkin's method, nonlinear critical buckling loads are determined. Various types of non-uniform temperature distribution in the radial direction are considered. Finally, the effects of various parameters such as the nonlinear constant of elastic medium, thermal loading factor and small scale coefficient on the postbuckling of MEE CNT-MT nanotubes are studied.
ERIC Educational Resources Information Center
Ngo, Duc Minh
2009-01-01
Current methodologies used for the inference of thin film stresses through curvatures are strictly restricted to stress and curvature states which are assumed to remain uniform over the entire film/substrate system. In this dissertation, we extend these methodologies to non-uniform stress and curvature states for the single layer of thin film or…
Stress and strain concentration at a circular hole in an infinite plate
NASA Technical Reports Server (NTRS)
Stowell, Elbridge Z
1950-01-01
The theory of elasticity shows that the maximum stress at a circular hole in an infinite plate in tension is three times the applied stress when the material remains elastic. The effect of plasticity of the material is to lower this ratio. This paper considers the theoretical problem of the stress distribution in an infinitely large sheet with a circular hole for the general case where the material may have any stress-strain curve. The plate is assumed to be under uniform tension at a large distance from the hole. The material is taken to be isotropic and incompressible. (author)
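In the elastic limit, the factor of 3 quoted above follows from the Kirsch solution: on the rim of the hole the tangential stress is σ_θθ = σ(1 − 2 cos 2θ), with θ measured from the loading direction, giving 3σ at θ = ±90° and −σ at θ = 0. A minimal sketch of that elastic baseline (the plastic lowering of the ratio analyzed in the paper is not modeled here):

```python
import math

def hoop_stress_at_hole(applied, theta):
    """Kirsch solution for the tangential (hoop) stress on the rim of a
    circular hole in an infinite elastic plate under uniaxial tension
    `applied`; theta is measured from the loading direction."""
    return applied * (1.0 - 2.0 * math.cos(2.0 * theta))

# Peak concentration: three times the applied stress, at 90 degrees
# from the loading direction.
peak = hoop_stress_at_hole(1.0, math.pi / 2)
```

The compressive value −σ at θ = 0 is also a standard feature of the elastic solution; plasticity modifies both extremes.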
Theoretical model for plasmonic photothermal response of gold nanostructures solutions
NASA Astrophysics Data System (ADS)
Phan, Anh D.; Nga, Do T.; Viet, Nguyen A.
2018-03-01
Photothermal effects of gold core-shell nanoparticles and nanorods dispersed in water are theoretically investigated using the transient bioheat equation and the extended Mie theory. Properly calculating the absorption cross section is a crucial step in determining the elevation of the solution temperature. The nanostructures are assumed to be randomly and uniformly distributed in the solution. Our calculated temperature increases during laser illumination show reasonable qualitative and quantitative agreement with previous experiments across a variety of systems. This approach can be a highly reliable tool to predict photothermal effects in experimentally unexplored structures. We also validate our approach and discuss its limitations.
Bujkiewicz, Sylwia; Riley, Richard D
2016-01-01
Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example is used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities. These recommendations are especially important with a small number of studies and missing data. 
PMID:26988929
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwid, M; Zhang, H
Purpose: The purpose of this study was to evaluate the dosimetric impact of beam energy on IORT treatment of residual cancer cells with different cancer cell distributions after breast-conserving surgery. Methods: The three-dimensional (3D) radiation doses of IORT using a 4-cm spherical applicator at energies of 40 keV and 50 keV were separately calculated at different depths of the postsurgical tumor bed. The modified linear quadratic model (MLQ) was used to estimate the radiobiological response of the tumor cells assuming different radio-sensitivities and density distributions. The impact of radiation was evaluated for two types of breast cancer cell lines (α/β = 10 and α/β = 3.8) at 20 Gy dose prescribed at the applicator surface. Cancer cell distributions in the postsurgical tissue field were assumed to be Gaussian with standard deviations of 0.5, 1 and 2 mm respectively, namely cancer cell infiltrations of 1.5, 3, and 6 mm respectively. The surface cancer cell percentage was assumed to be 0.01%, 0.1%, 1% and 10% separately. The equivalent uniform doses (EUD) for all the scenarios were calculated. Results: The EUDs were found to be dependent on the distributions of cancer cells, but independent of the cancer cell radio-sensitivities and the density at the surface. EUDs of 50 keV are 1% larger than those of 40 keV. For a prescription dose of 20 Gy, EUDs of the 50 keV beam are 17.52, 16.21 and 13.14 Gy respectively for 0.5, 1.0 and 2.0 mm standard deviations of the cancer cell Gaussian distributions. Conclusion: The impact of the selected IORT beam energy is very minimal. When the energy is changed from 50 keV to 40 keV, the EUDs are almost the same for the same cancer cell distribution. 40 keV can be safely used as an alternative to the 50 keV beam in IORT.
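The EUD bookkeeping can be sketched with the plain linear-quadratic model (the study uses a modified LQ model, so this is a simplified stand-in): average the cell survival over the dose-and-density distribution, then invert αD + βD² = −ln S for the uniform dose giving the same survival. All parameter values below are illustrative, not the study's.

```python
import math

def surviving_fraction(dose, alpha, beta):
    # Linear-quadratic cell survival.
    return math.exp(-alpha * dose - beta * dose**2)

def equivalent_uniform_dose(doses, weights, alpha, beta):
    """Uniform dose giving the same overall clonogen survival as a
    nonuniform dose distribution (doses with cell-density weights)."""
    total = sum(weights)
    s = sum(w * surviving_fraction(d, alpha, beta)
            for d, w in zip(doses, weights)) / total
    # Invert alpha*D + beta*D^2 = -ln(S) for D (positive root).
    lns = -math.log(s)
    return (-alpha + math.sqrt(alpha**2 + 4 * beta * lns)) / (2 * beta)
```

Because survival is exponential in dose, cold spots dominate the average and pull the EUD toward the minimum dose, which is consistent with the abstract's EUDs falling well below the 20-Gy prescription as the cancer cell distribution extends deeper.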
Azimuthal ULF Structure and Radial Transport of Charged Particles
NASA Astrophysics Data System (ADS)
Ali, A.; Elkington, S. R.
2015-12-01
The Van Allen radiation belts contain highly energetic particles which interact with a variety of plasma and MHD waves. Waves with frequencies in the ULF range are understood to play an important role in the loss and acceleration of energetic particles. There is still much to be understood about the interaction between charged particles and ULF waves in the inner magnetosphere and how they influence particle diffusion. We investigate how the ULF wave power distribution in azimuth affects radial diffusion of charged particles. Analytic treatments of the diffusion coefficients generally assume a uniform distribution of power in azimuth, but in situ measurements suggest otherwise. The power profiles obtained from in situ measurements will be used to conduct particle simulations to see how well the simulated diffusion coefficients agree with diffusion coefficients estimated directly from in situ measurements. We also examine the ULF wave power distribution across different modes. In order to use in situ point measurements from spacecraft, it is typically assumed that all of the wave power resides in the m=1 mode. How valid is this assumption? Do higher modes contain a major fraction of the total power? If so, under what conditions? One strategy is to use realistic azimuthal power profiles obtained from in situ measurements (such as from the Van Allen Probes) to drive simulations and determine how the power distribution across modes higher than one depends on parameters such as the level of geomagnetic activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
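A minimal sketch of the modal-combination step at the heart of the Response Spectrum Method the benchmarks exercise: peak modal responses are formed from frequencies, participation factors, mode shapes, and spectral ordinates, then combined across modes (here by SRSS). All numbers below are invented for illustration, not taken from the benchmark report.

```python
import numpy as np

freqs = np.array([2.0, 5.0, 11.0])            # modal frequencies, Hz (context only)
gamma = np.array([1.3, 0.5, 0.2])             # modal participation factors
phi = np.array([[1.0, 0.6],                   # mode shapes: rows = modes,
                [0.8, -0.7],                  # columns = two response nodes
                [0.3, 0.9]])
Sd = np.array([0.040, 0.010, 0.002])          # spectral displacement per mode, m

# Peak modal displacement at each node, then SRSS combination across modes
u_modal = gamma[:, None] * phi * Sd[:, None]  # shape (3 modes, 2 nodes)
u_srss = np.sqrt((u_modal ** 2).sum(axis=0))  # combined peak per node
```

SRSS assumes well-separated modal frequencies; closely spaced modes would call for a combination rule such as CQC instead.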
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender's beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker's payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker's payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
Study of the mechanical behavior of a 2-D carbon-carbon composite
NASA Technical Reports Server (NTRS)
Avery, W. B.; Herakovich, C. T.
1987-01-01
The out-of-plane fracture of a 2-D carbon-carbon composite was observed and characterized to gain an understanding of the factors influencing the stress distribution in such a laminate. Finite element analyses of a two-ply carbon-carbon composite under in-plane, out-of-plane, and thermal loading were performed. Under in-plane loading all components of stress were strong functions of geometry. Additionally, large thermal stresses were predicted. Out-of-plane tensile tests revealed that failure was interlaminar, and that cracks propagated along the fiber-matrix interface. An elasticity solution was utilized to analyze an orthotropic fiber in an isotropic matrix under uniform thermal load. The analysis reveals that the stress distributions in a transversely orthotropic fiber are radically different than those predicted assuming the fiber to be transversely isotropic.
Scavenging of radioactive soluble gases from inhomogeneous atmosphere by evaporating rain droplets.
Elperin, Tov; Fominykh, Andrew; Krasovitov, Boris
2015-05-01
We analyze effects of inhomogeneous concentration and temperature distributions in the atmosphere, rain droplet evaporation and radioactive decay of soluble gases on the rate of trace gas scavenging by rain. We employ a one-dimensional model of precipitation scavenging of radioactive soluble gaseous pollutants that is valid for small gradients and non-uniform initial altitudinal distributions of temperature and concentration in the atmosphere. We assume that conditions of equilibrium evaporation of rain droplets are fulfilled. It is demonstrated that transient altitudinal distribution of concentration under the influence of rain is determined by the linear wave equation that describes propagation of a scavenging wave front. The obtained equation is solved by the method of characteristics. Scavenging coefficients are calculated for wet removal of gaseous iodine-131 and tritiated water vapor (HTO) for the exponential initial distribution of trace gases concentration in the atmosphere and linear temperature distribution. Theoretical predictions of the dependence of the magnitude of the scavenging coefficient on rain intensity for tritiated water vapor are in good agreement with the available atmospheric measurements.
Modeling and analysis of LWIR signature variability associated with 3D and BRDF effects
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Less, David; Jin, Xuemin; Rynes, Peter
2016-05-01
Algorithms for retrieval of surface reflectance, emissivity or temperature from a spectral image almost always assume uniform illumination across the scene and horizontal surfaces with Lambertian reflectance. When these algorithms are used to process real 3-D scenes, the retrieved "apparent" values contain the strong, spatially dependent variations in illumination as well as surface bidirectional reflectance distribution function (BRDF) effects. This is especially problematic with horizontal or near-horizontal viewing, where many observed surfaces are vertical, and where horizontal surfaces can show strong specularity. The goals of this study are to characterize long-wavelength infrared (LWIR) signature variability in an HSI 3-D scene and develop practical methods for estimating the true surface values. We take advantage of synthetic near-horizontal imagery generated with the high-fidelity MultiService Electro-optic Signature (MuSES) model, and compare retrievals of temperature and directional-hemispherical reflectance using standard sky downwelling illumination and MuSES-based non-uniform environmental illumination.
NASA Astrophysics Data System (ADS)
Voronkov, V. V.; Falster, R.; Kim, TaeHyeong; Park, SoonSung; Torack, T.
2013-07-01
Silicon wafers, coated with a silicon nitride layer and subjected to high temperature Rapid Thermal Annealing (RTA) in Ar, show—upon a subsequent two-step precipitation anneal cycle (such as 800 °C + 1000 °C)—peculiar depth profiles of oxygen precipitate densities. Some profiles are sharply peaked near the wafer surface, sometimes with zero bulk density. Other profiles are uniform in depth. The maximum density is always the same. These profiles are well reproduced by simulations assuming that precipitation starts from uniformly distributed small oxide plates originating from the RTA step and composed of oxygen atoms and vacancies ("VO2 plates"). During the first step of the precipitation anneal, an oxide layer propagates around this core plate by a process of oxygen attachment, meaning that an oxygen-only ring-shaped plate emerges around the original plate. These rings, depending on their size, then either dissolve or grow during the second part of the anneal, leading to a rich variety of density profiles.
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future, as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value with a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
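The core construction can be checked numerically: multiply seven independent uniform factors and verify that the log of the product is nearly Gaussian, as the CLT predicts, so the product itself is close to lognormal. The means and spreads below are illustrative stand-ins, not Maccone's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Seven Drake-like factors, each uniform around an assumed mean value
# (illustrative numbers only).
means = np.array([3.5e11, 0.5, 2.0, 0.3, 0.1, 0.1, 1e-7])
half_widths = 0.5 * means
factors = [rng.uniform(m - h, m + h, n) for m, h in zip(means, half_widths)]
N = np.prod(factors, axis=0)     # Monte Carlo samples of the Drake product

# CLT on log N: the sum of seven independent logs should be near-Gaussian,
# so the skewness of log N should be small.
logN = np.log(N)
skew = np.mean(((logN - logN.mean()) / logN.std()) ** 3)
```

With only seven factors the convergence is already good enough that the residual skewness of log N is a fraction of what a single log-uniform factor has.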
Stability of Electrons in the Virtual Cathode Region of an IEC
NASA Astrophysics Data System (ADS)
Kim, Hyng-Jin; Miley, George; Momota, Hiromu
2003-04-01
In the Inertial Electrostatic Confinement (IEC) device, electrons are confined inside a virtual anode that in turn confines ions. Prior stability studies [1, 2] have considered systems in which one species is electrostatically confined by the other, and either or both species are out of local thermal equilibrium. In the present research, electron stability in the virtual cathode region of an ion injected IEC is being studied. The ion density in an IEC is non-uniform due to the radial electrostatic potential, and increases toward the center region. The potential near the virtual cathode is assumed to have a parabolic shape and is determined assuming that the net space charge density is constant in that region. The corresponding ion distribution function is assumed to have the form f = Cσ(H W)/L^0.5 and the electron response is taken to be diabatic. Then using a variational principle after linearizing the hydrodynamic equations, stability properties of the electron layer are determined. Results will be presented as a function of injected ion/electron current ratios. 1. L. Chacon and D. C. Barnes, Phys. Plasmas 7, 4774 (2000). 2. D. C. Barnes, L. Chacon, and J. M. Finn, Phys. Plasmas 9, 4448 (2002).
Yager, Richard M.; Southworth, Scott C.; Voss, Clifford I.
2008-01-01
Ground-water flow was simulated using variable-direction anisotropy in hydraulic conductivity to represent the folded, fractured sedimentary rocks that underlie the Shenandoah Valley in Virginia and West Virginia. The anisotropy is a consequence of the orientations of fractures that provide preferential flow paths through the rock, such that the direction of maximum hydraulic conductivity is oriented within bedding planes, which generally strike N30 deg E; the direction of minimum hydraulic conductivity is perpendicular to the bedding. The finite-element model SUTRA was used to specify variable directions of the hydraulic-conductivity tensor in order to represent changes in the strike and dip of the bedding throughout the valley. The folded rocks in the valley are collectively referred to as the Massanutten synclinorium, which contains about a 5-km thick section of clastic and carbonate rocks. For the model, the bedrock was divided into four units: a 300-m thick top unit with 10 equally spaced layers through which most ground water is assumed to flow, and three lower units each containing 5 layers of increasing thickness that correspond to the three major rock units in the valley: clastic, carbonate and metamorphic rocks. A separate zone in the carbonate rocks that is overlain by colluvial gravel - called the western-toe carbonate unit - was also distinguished. Hydraulic-conductivity values were estimated through model calibration for each of the four rock units, using data from 354 wells and 23 streamflow-gaging stations. Conductivity tensors for metamorphic and western-toe carbonate rocks were assumed to be isotropic, while conductivity tensors for carbonate and clastic rocks were assumed to be anisotropic. The directions of the conductivity tensor for carbonate and clastic rocks were interpolated for each mesh element from a stack of 'form surfaces' that provided a three-dimensional representation of bedrock structure. 
Model simulations were run with (1) variable strike and dip, in which conductivity tensors were aligned with the strike and dip of the bedding, and (2) uniform strike in which conductivity tensors were assumed to be horizontally isotropic with the maximum conductivity direction parallel to the N30 deg E axis of the valley and the minimum conductivity direction perpendicular to the horizontal plane. Simulated flow penetrated deeper into the aquifer system with the uniform-strike tensor than with the variable-strike-and-dip tensor. Sensitivity analyses suggest that additional information on recharge rates would increase confidence in the estimated parameter values. Two applications of the model were conducted - the first, to determine depth of recent ground-water flow by simulating the distribution of ground-water ages, showed that most shallow ground water is less than 10 years old. Ground-water age distributions computed by variable-strike-and-dip and uniform-strike models were similar, but differed beneath Massanutten Mountain in the center of the valley. The variable-strike-and-dip model simulated flow from west to east parallel to the bedding of the carbonate rocks beneath Massanutten Mountain, while the uniform-strike model, in which flow was largely controlled by topography, simulated this same area as an east-west ground-water divide. The second application, which delineated capture zones for selected well fields in the valley, showed that capture zones delineated with both models were similar in plan view, but differed in vertical extent. Capture zones simulated by the variable-strike-and-dip model extended downdip with the bedding of carbonate rock and were relatively shallow, while those simulated by the uniform-strike model extended to the bottom of the flow system, which is unrealistic. 
These results suggest that simulations of ground-water flow through folded fractured rock can be constructed using SUTRA to represent variable orientations of the hydraulic-conductivity tensor and produce a
Odd nitrogen production by meteoroids
NASA Technical Reports Server (NTRS)
Park, C.; Menees, G. P.
1978-01-01
The process by which odd nitrogen species (atomic nitrogen and nitric oxide) are formed during atmospheric entry of meteoroids is analyzed theoretically. An ablating meteoroid is assumed to be a point source of mass with a continuum regime evolving in its wake. The amounts of odd nitrogen species, produced by high-temperature reactions of air in the continuum wake, are calculated by numerical integration of chemical rate equations. Flow properties are assumed to be uniform across the wake, and 29 reactions involving five neutral species and five singly ionized species are considered, as well as vibrational and electron temperature nonequilibrium phenomena. The results, when they are summed over the observed mass, velocity, and entry-angle distribution of meteoroids, provide odd-nitrogen-species annual global production rates as functions of altitude. The peak production of nitric oxide is found to occur at an altitude of about 85 km; atomic nitrogen production peaks at about 95 km. The total annual rate for nitric oxide is 40 million kg; for atomic nitrogen it is 170 million kg.
Aspects of silicon bulk lifetimes
NASA Technical Reports Server (NTRS)
Landsberg, P. T.
1985-01-01
The best lifetimes attained for bulk crystalline silicon as a function of doping concentration are analyzed. It is assumed that the dopants which set the Fermi level do not contribute to the recombination traffic, which is due to an unknown defect. This defect is assumed to have two charge states, neutral and negative; the neutral defect concentration is frozen-in at some temperature T_f. At the higher doping concentrations the band-band Auger effect should be included, by using a generalization of the Shockley-Read-Hall (SRH) mechanism. The generalization of the SRH mechanism is discussed. This formulation gives a straightforward procedure for incorporating both band-band and band-trap Auger effects in the SRH procedure. Two related questions arise in this context: (1) it may sometimes be useful to write the steady-state occupation probability of the traps implied by the SRH procedure in a form which approximates the Fermi-Dirac distribution; and (2) the effect on the SRH mechanism of spreading N_t levels at one energy uniformly over a range of energies is discussed.
Electrokinetic flow in a capillary with a charge-regulating surface polymer layer.
Keh, Huan J; Ding, Jau M
2003-07-15
An analytical study of the steady electrokinetic flow in a long uniform capillary tube or slit is presented. The inside wall of the capillary is covered by a layer of adsorbed or covalently bound charge-regulating polymer in equilibrium with the ambient electrolyte solution. In this solvent-permeable and ion-penetrable surface polyelectrolyte layer, ionogenic functional groups and frictional segments are assumed to distribute at uniform densities. The electrical potential and space charge density distributions in the cross section of the capillary are obtained by solving the linearized Poisson-Boltzmann equation. The fluid velocity profile due to the application of an electric field and a pressure gradient through the capillary is obtained from the analytical solution of a modified Navier-Stokes/Brinkman equation. Explicit formulas for the electroosmotic velocity, the average fluid velocity and electric current density on the cross section, and the streaming potential in the capillary are also derived. The results demonstrate that the direction of the electroosmotic flow and the magnitudes of the fluid velocity and electric current density are dominated by the fixed charge density inside the surface polymer layer, which is determined by the regulation characteristics such as the dissociation equilibrium constants of the ionogenic functional groups in the surface layer and the concentration of the potential-determining ions in the bulk solution.
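For the bare-wall limiting case (no polymer layer), the linearized Poisson-Boltzmann potential in a slit and the resulting electroosmotic velocity profile have a simple closed form, which gives a feel for the quantities this paper generalizes. The parameter values below are assumed, and the charge-regulating surface layer of the paper is deliberately omitted.

```python
import numpy as np

# Debye-Hückel sketch of EOF in a slit of half-width h (bare walls only;
# the paper's polyelectrolyte layer and charge regulation are not modeled).
eps = 7.08e-10      # F/m, permittivity of water
eta = 8.9e-4        # Pa*s, viscosity
zeta = -0.025       # V, assumed wall potential
E = 1e4             # V/m, applied axial field
kappa = 1e8         # 1/m, inverse Debye length (~10 nm screening)
h = 1e-6            # m, channel half-width

y = np.linspace(-h, h, 201)
psi = zeta * np.cosh(kappa * y) / np.cosh(kappa * h)   # potential profile
u = -(eps * E / eta) * (zeta - psi)                    # EOF velocity profile

# Smoluchowski plug-flow limit, reached at the channel center when kappa*h >> 1
u_max = -(eps * zeta * E / eta)
```

Because kappa*h = 100 here, the velocity is a flat plug across almost the whole channel, dropping to zero only in the thin Debye layers at the walls.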
NASA Astrophysics Data System (ADS)
Gori-Giorgi, Paola; Ziesche, Paul
2002-12-01
The momentum distribution of the unpolarized uniform electron gas in its Fermi-liquid regime, n(k,rs), with the momenta k measured in units of the Fermi wave number kF and with the density parameter rs, is constructed with the help of the convex Kulik function G(x). It is assumed that n(0,rs),n(1±,rs), the on-top pair density g(0,rs), and the kinetic energy t(rs) are known (respectively, from accurate calculations for rs=1,…,5, from the solution of the Overhauser model, and from quantum Monte Carlo calculations via the virial theorem). Information from the high- and the low-density limit, corresponding to the random-phase approximation and to the Wigner crystal limit, is used. The result is an accurate parametrization of n(k,rs), which fulfills most of the known exact constraints. It is in agreement with the effective-potential calculations of Takada and Yasuhara [Phys. Rev. B 44, 7879 (1991)], is compatible with quantum Monte Carlo data, and is valid in the density range rs≲12. The corresponding cumulant expansions of the pair density and of the static structure factor are discussed, and some exact limits are derived.
High-energy Electron Scattering and the Charge Distributions of Selected Nuclei
DOE R&D Accomplishments Database
Hahn, B.; Ravenhall, D. G.; Hofstadter, R.
1955-10-01
Experimental results are presented of electron scattering by Ca, V, Co, In, Sb, Hf, Ta, W, Au, Bi, Th, and U, at 183 MeV and (for some of the elements) at 153 MeV. For those nuclei for which asphericity and inelastic scattering are absent or unimportant, i.e., Ca, V, Co, In, Sb, Au, and Bi, a partial wave analysis of the Dirac equation has been performed in which the nuclei are represented by static, spherically symmetric charge distributions. Smoothed uniform charge distributions have been assumed; these are characterized by a constant charge density in the central region of the nucleus, with a smoothed-out surface. Essentially two parameters can be determined, related to the radius and to the surface thickness. An examination of the Au experiments shows that the functional forms of the surface are not important, and that the charge density in the central regions is probably fairly flat, although it cannot be determined very accurately.
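The "smoothed uniform" shape described above is commonly parametrized as a two-parameter Fermi distribution: flat in the nuclear interior, falling off smoothly at the surface. A sketch with assumed values of the half-density radius and diffuseness for a gold-like nucleus (illustrative, not the paper's fitted parameters):

```python
import numpy as np

c, a = 6.38, 0.535                  # fm: half-density radius, surface diffuseness
r = np.linspace(0.0, 12.0, 1201)    # fm
rho = 1.0 / (1.0 + np.exp((r - c) / a))   # charge density, normalized to center

# Surface thickness t: distance over which rho falls from 90% to 10% of center
t = 4.0 * np.log(3.0) * a
```

The two quantities the abstract says can be determined map directly onto c (radius) and t ≈ 4.39 a (surface thickness).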
Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.
2013-01-01
Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.
NASA Astrophysics Data System (ADS)
Tick, G. R.; Wei, S.; Sun, H.; Zhang, Y.
2016-12-01
Pore-scale heterogeneity, NAPL distribution, and sorption/desorption processes can significantly affect aqueous phase elution and mass flux in porous media systems. A scale-independent fractional derivative model (tFADE) was applied to simulate elution curves for a series of columns (5 cm, 7 cm, 15 cm, 25 cm, and 80 cm) homogeneously packed with 20/30-mesh sand and distributed with uniform saturations (7-24%) of NAPL phase trichloroethene (TCE). An additional set of columns (7 cm and 25 cm) was packed with a heterogeneous distribution of quartz sand upon which TCE was emplaced by imbibing the immiscible liquid, under stable displacement conditions, to simulate a spill-type process. The tFADE model was better able to represent experimental elution behavior for systems that exhibited extensive long-term concentration tailing, while requiring far fewer parameters than typical multi-rate mass transfer (MRMT) models. However, the tFADE model was not able to effectively simulate the entire elution curve for systems with short concentration tailing periods, since it assumes a power-law distribution for the TCE dissolution rate. Such limitations may be addressed using the tempered fractional derivative model, which can capture the single-rate mass transfer process and therefore the short elution concentration tailing behavior. Numerical solution of the tempered fractional-derivative model in bounded domains, however, remains a challenge and requires further study. Nevertheless, the tFADE model shows excellent promise for understanding impacts on concentration elution behavior for systems in which physical heterogeneity, non-uniform NAPL distribution, and pronounced sorption-desorption effects dominate or are present.
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Wright, T. J.
2006-12-01
We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the square misfit under the assumption of a uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of a uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problem is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to compromise between the reciprocal requirements for model resolution and estimation errors in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal value of the relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC in determining the optimal dip angle, we resolved the non-linearity of the inverse problem.
We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than before.
NASA Astrophysics Data System (ADS)
Shabeeb, Ahmeed; Taha, Uday; dragonetti, giovanna; Lamaddalena, Nicola; Coppola, Antonio
2016-04-01
In order to evaluate how efficiently and uniformly drip irrigation systems can deliver water to emitters distributed around a field, we need methods for measuring/calculating water application efficiency (WAE) and emission uniformity (EU). In general, the calculation of the WAE and of other efficiency indices requires the measurement of the water stored in the root zone. Measuring water storage in soils allows one to say directly how much water a given location of the field retains after receiving a given amount of irrigation water. And yet, due to the difficulty of measuring water content variability under an irrigation system at field scale, it is quite common to use EU as a proxy indicator of irrigation performance. This implicitly means assuming that the uniformity of water application is immediately reflected in a uniformity of water stored in the root zone; in other words, that if a site receives more water it will store more water. Nevertheless, due to the heterogeneity of soil hydrological properties, the same EU may correspond to very different distributions of water stored in the soil root zone. 1) In the case of isolated drippers, the storages measured in the soil root zone shortly after an irrigation event may or may not differ from the water height applied at the surface, depending on the vertical/horizontal development of the wetted bulbs. Specifically, in the case of dominant horizontal spreading, the water storage is expected to reflect the distribution of water applied at the surface. To the contrary, in the case of relatively significant vertical spreading, deep percolation fluxes (fluxes leaving the root zone) may well induce water storages in the root zone significantly different from the water applied at the surface. 2) The drippers and laterals are close enough that the wetted bulbs below adjacent drippers may interact.
In this case, lateral fluxes in the soil may well induce water storages in the root zone that are significantly uncorrelated with the uniformity of the water applied at the surface. In both cases, the size of the lateral fluxes compared to the vertical ones throughout the rooting zone depends, besides the soil hydraulic properties, on the amount of water delivered to the soil. Larger water applications produce greater spreading, but in both the horizontal and vertical directions. Increased vertical spreading may be undesirable because water moving below the active root zone can result in wasted water, loss of nutrients, and groundwater pollution.
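The EU index discussed above is conventionally computed as the mean discharge of the lowest quarter of emitters divided by the overall mean, in percent; a sketch on synthetic emitter flows. The lowest-quarter formula is the classical convention and an assumption here; the paper's exact EU definition may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
flows = rng.normal(4.0, 0.3, 64)   # L/h, synthetic emitter discharges

# Classical emission uniformity: mean of the lowest quarter over overall mean
q = np.sort(flows)
low_quarter = q[: len(q) // 4]
eu = 100.0 * low_quarter.mean() / q.mean()
```

Note that EU characterizes only the applied water; as the abstract argues, it says nothing by itself about how that water redistributes within the root zone.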
Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator
NASA Technical Reports Server (NTRS)
Bents, D. J.
1982-01-01
A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution, and presents the alternate consolidation designs which occur. They are compared to the baseline (non-uniform current) design with respect to performance, and hardware requirements. A rational basis is presented for comparing the requirements for the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.
Mu, Guangyu; Liu, Ying; Wang, Limin
2015-01-01
The spatial pooling method, such as spatial pyramid matching (SPM), is crucial in the bag-of-features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of all visual words obeys a uniform distribution over these regular grids. In practice, however, we consider that different visual words should obey different spatial layout distributions. To improve SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extension of the Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids of traditional SPM. Furthermore, we can compute the grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms traditional spatial pooling methods, and is competitive with the state-of-the-art classification accuracy on several challenging image datasets.
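For reference, the regular-grid SPM pooling that SDP generalizes can be sketched as follows; the visual-word assignments and token coordinates below are synthetic, and the pyramid levels are a typical (assumed) choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_tokens = 50, 500
words = rng.integers(0, n_words, n_tokens)     # visual-word index per token
xy = rng.random((n_tokens, 2))                 # normalized (x, y) per token

def spm_histogram(words, xy, n_words, levels=(1, 2, 4)):
    """Concatenate per-cell word histograms over a pyramid of regular grids."""
    feats = []
    for g in levels:                           # g x g regular grid at each level
        cell = np.minimum((xy * g).astype(int), g - 1)
        idx = cell[:, 1] * g + cell[:, 0]      # flat cell index per token
        for c in range(g * g):
            h = np.bincount(words[idx == c], minlength=n_words).astype(float)
            feats.append(h / max(h.sum(), 1.0))
    return np.concatenate(feats)

f = spm_histogram(words, xy, n_words)          # length n_words * (1 + 4 + 16)
```

SDP's change, per the abstract, is to replace these fixed cells with per-word spatial distributions estimated by a Gaussian-mixture extension, so token-to-"grid" weights become soft and word-specific.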
Robustness of optimal random searches in fragmented environments
NASA Astrophysics Data System (ADS)
Wosniack, M. E.; Santos, M. C.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.
2015-05-01
The random search problem is a challenging and interdisciplinary topic of research in statistical physics. Realistic searches usually take place in nonuniform heterogeneous distributions of targets, e.g., patchy environments and fragmented habitats in ecological systems. Here we present a comprehensive numerical study of search efficiency in arbitrarily fragmented landscapes with unlimited visits to targets that can only be found within patches. We assume a random walker selecting uniformly distributed turning angles and step lengths from an inverse power-law tailed distribution with exponent μ. Our main finding is that for a large class of fragmented environments the optimal strategy corresponds approximately to the same value μopt ≈ 2. Moreover, this exponent is indistinguishable from the well-known exact optimal value μopt = 2 for the low-density limit of homogeneously distributed revisitable targets. Surprisingly, the best search strategies do not depend (or depend only weakly) on the specific details of the fragmentation. Finally, we discuss the mechanisms behind this observed robustness and comment on the relevance of our results both to random search theory in general and specifically to the foraging problem in the biological context.
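The step-length rule described above (a power-law tail with exponent μ above a minimum step l0) can be sampled by inverse transform. The function name and parameter values below are illustrative, assuming a pure Pareto tail p(l) ∝ l^(−μ) for l ≥ l0 with μ > 1:

```python
import numpy as np

def levy_steps(mu, l0=1.0, n=100000, seed=1):
    """Inverse-transform sampling of step lengths p(l) ~ l**(-mu), l >= l0.

    For u ~ Uniform(0, 1), l = l0 * u**(-1/(mu - 1)) has the desired tail.
    """
    u = np.random.default_rng(seed).uniform(size=n)
    return l0 * u ** (-1.0 / (mu - 1.0))

for mu in (1.5, 2.0, 3.0):
    steps = levy_steps(mu)
    print(mu, np.median(steps))
# smaller mu mixes many short steps with rare, very long relocations
```

The analytic median is l0 · 2^(1/(μ−1)), so the printed medians shrink as μ grows, reflecting the shorter tail.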
The Impact of Uncertain Physical Parameters on HVAC Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai
HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step to the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of DR output to the uncertainty in the distribution parameters. The significance ranking of the uncertainty sources is given for future guidance in the modeling of HVAC demand response.
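A minimal sketch of the quasi-Monte Carlo-plus-regression sensitivity idea, assuming a toy steady-state HVAC power model and hypothetical parameter ranges (the paper's actual building model and its Prony analysis are not reproduced):

```python
import numpy as np
from scipy.stats import qmc

# hypothetical parameter ranges: floor area (m^2), insulation R-value,
# thermal mass (kg), and HVAC coefficient of performance
lo = np.array([80.0, 1.5, 2000.0, 2.5])
hi = np.array([250.0, 4.0, 8000.0, 4.5])

sampler = qmc.Sobol(d=4, scramble=True, seed=7)
X = qmc.scale(sampler.random_base2(m=8), lo, hi)   # 256 quasi-random designs

# stand-in response: steady-state HVAC power under an assumed toy model
y = 0.02 * X[:, 0] / X[:, 1] / X[:, 3] + 1e-5 * X[:, 2]

# standardized regression coefficients as a cheap sensitivity ranking
Z = (X - X.mean(0)) / X.std(0)
beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(Z)), Z], y, rcond=None)
ranking = np.argsort(-np.abs(beta[1:]))
print(ranking)  # parameter indices, most to least influential
```

Standardizing the inputs before the regression makes the coefficient magnitudes directly comparable across parameters with very different units.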
Stability derivatives for bodies of revolution at subsonic speeds
NASA Technical Reports Server (NTRS)
Liu, D. D.; Platzer, M. F.; Ruo, S. Y.
1976-01-01
The paper considers a rigid pointed body of revolution in a steady uniform subsonic flow. The body performs harmonic small-amplitude pitching oscillations around its zero angle of attack position. The body is assumed to be smooth and sufficiently slender so that the small perturbation concept can be applied. The basis of the method used, following Revell (1960), is the relation of a body-fixed perturbation potential to the general velocity potential. Normal force distributions as well as total force and moment coefficients are calculated for parabolic spindles and the numerical results show good agreement between Revell's second-order slender body theory and the present theory for the static stability derivatives of the parabolic spindles.
Precise mass determination of single cell with cantilever-based microbiosensor system.
Łabędź, Bogdan; Wańczyk, Aleksandra; Rajfur, Zenon
2017-01-01
The mass of a single cell of brewer's yeast Saccharomyces cerevisiae was determined by means of a microcantilever-based biosensor, the Cantisens CSR-801 (Concentris, Basel, Switzerland); its dry mass was found to be 47.65 ± 1.05 pg. The cell position along the length of the cantilever proved crucial in this mass determination. Moreover, calculations that include the cells' positions on the cantilever provide a threefold better degree of accuracy than those which assume a uniform mass distribution. We have also examined the influence of storage time on the single-cell mass. Our results show that after 6 months there is an increase in the average mass of a single yeast cell.
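The position correction can be illustrated with the textbook first flexural mode shape of a cantilever: a point mass at position x shifts the resonance in proportion to the squared mode shape at x, so a tip-calibrated mass estimate must be divided by that factor. This is a generic sketch under that standard assumption, not the Cantisens calibration procedure:

```python
import numpy as np

BETA_L = 1.8751  # first cantilever eigenvalue (beta * L)
SIGMA = 0.7341   # (cosh(BL) + cos(BL)) / (sinh(BL) + sin(BL))

def mode_shape(x_over_L):
    """First flexural mode shape of a cantilever; phi(0) = 0, phi(L) ~ 2."""
    z = BETA_L * x_over_L
    return np.cosh(z) - np.cos(z) - SIGMA * (np.sinh(z) - np.sin(z))

def position_corrected_mass(m_apparent_tip, x_over_L):
    """A point mass at x contributes m * (phi(x)/phi(L))**2 to the effective
    modal mass, so the tip-calibrated estimate is divided by that factor."""
    r = mode_shape(x_over_L) / mode_shape(1.0)
    return m_apparent_tip / r**2

# a cell at 70% of the cantilever length weighs considerably more than a
# tip calibration would suggest
print(position_corrected_mass(30.0, 0.7))
```

The factor grows quickly toward the clamped end, which is why ignoring the cell's position produces a severalfold error.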
NASA Astrophysics Data System (ADS)
Sugiyanto, S.; Hardyanto, W.; Marwoto, P.
2018-03-01
Transport phenomena are found in many problems in many engineering and industrial sectors. We analyzed a Lattice Boltzmann method with Two-Relaxation-Time (LTRT) collision operators for simulation of a pollutant moving through a medium as a two-dimensional (2D) transport problem in a rectangular region model. This model consists of a 2D rectangular region with a length of 54 (x) and a width of 27 (y), filled with an isotropic homogeneous medium. Initially, the concentration is zero throughout the region of interest. A concentration of 1 is maintained at 9 < y < 18, whereas a concentration of zero is maintained at 0 < y < 9 and 18 < y < 27. A specific discharge (Darcy velocity) of 1.006 is assumed. A diffusion coefficient of 0.8333 is distributed uniformly with a uniform porosity of 0.35. A computer program is written in MATLAB to compute the concentration of pollutant at any specified place and time. The program shows that the LTRT solution with quadratic equilibrium distribution functions (EDFs) and relaxation time τa = 1.0 is in good agreement with other numerical solution methods such as 3DLEWASTE (Hybrid Three-dimensional Lagrangian-Eulerian Finite Element Model of Waste Transport Through Saturated-Unsaturated Media) obtained by Yeh and 3DFEMWATER-LHS (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media with Latin Hypercube Sampling) obtained by Hardyanto.
Novel Physical Model for DC Partial Discharge in Polymeric Insulators
NASA Astrophysics Data System (ADS)
Andersen, Allen; Dennison, J. R.
The physics of DC partial discharge (DCPD) continues to pose a challenge to researchers. We present a new physically-motivated model of DCPD in amorphous polymers based on our dual-defect model of dielectric breakdown. The dual-defect model is an extension of standard static mean field theories, such as the Crine model, that describe avalanche breakdown of charge carriers trapped on uniformly distributed defect sites. It assumes the presence of both high-energy chemical defects and low-energy thermally-recoverable physical defects. We present our measurements of breakdown and DCPD for several common polymeric materials in the context of this model. Improved understanding of DCPD and how it relates to eventual dielectric breakdown is critical to the fields of spacecraft charging, high voltage DC power distribution, high density capacitors, and microelectronics. This work was supported by a NASA Space Technology Research Fellowship.
A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement
Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan
2012-01-01
In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it involves human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that a uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
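A hedged sketch of the recursive, population-balancing partition idea, using median splits along alternating axes; this is a simplification for illustration, not the paper's algorithm or its proven bound:

```python
import numpy as np

def partition(points, depth):
    """Recursively split a point set at the population median along
    alternating axes, yielding 2**depth regions of near-equal population."""
    if depth == 0:
        return [points]
    axis = depth % 2
    med = np.median(points[:, axis])
    left = points[points[:, axis] <= med]
    right = points[points[:, axis] > med]
    return partition(left, depth - 1) + partition(right, depth - 1)

rng = np.random.default_rng(3)
pop = rng.uniform(size=(10000, 2))   # synthetic population locations
regions = partition(pop, depth=3)    # 8 candidate service regions
sizes = [len(r) for r in regions]
print(sizes)                         # each close to 10000 / 8 = 1250
```

Because each split passes through the population median rather than the geometric midpoint, dense areas get geographically small regions and sparse areas get large ones, which is the point of population-based facility placement.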
NASA Astrophysics Data System (ADS)
Russell, Matthew J.; Jensen, Oliver E.; Galla, Tobias
2016-10-01
Motivated by uncertainty quantification in natural transport systems, we investigate an individual-based transport process involving particles undergoing a random walk along a line of point sinks whose strengths are themselves independent random variables. We assume particles are removed from the system via first-order kinetics. We analyze the system using a hierarchy of approaches when the sinks are sparsely distributed, including a stochastic homogenization approximation that yields explicit predictions for the extrinsic disorder in the stationary state due to sink strength fluctuations. The extrinsic noise induces long-range spatial correlations in the particle concentration, unlike fluctuations due to the intrinsic noise alone. Additionally, the mean concentration profile, averaged over both intrinsic and extrinsic noise, is elevated compared with the corresponding profile from a uniform sink distribution, showing that the classical homogenization approximation can be a biased estimator of the true mean.
Measurement of Circumstellar Disk Sizes in the Upper Scorpius OB Association with ALMA
NASA Astrophysics Data System (ADS)
Barenfeld, Scott A.; Carpenter, John M.; Sargent, Anneila I.; Isella, Andrea; Ricci, Luca
2017-12-01
We present detailed modeling of the spatial distributions of gas and dust in 57 circumstellar disks in the Upper Scorpius OB Association observed with ALMA at submillimeter wavelengths. We fit power-law models to the dust surface density and CO J = 3–2 surface brightness to measure the radial extent of dust and gas in these disks. We found that these disks are extremely compact: the 25 highest signal-to-noise disks have a median dust outer radius of 21 au, assuming an R^{-1} dust surface density profile. Our lack of CO detections in the majority of our sample is consistent with these small disk sizes assuming the dust and CO share the same spatial distribution. Of seven disks in our sample with well-constrained dust and CO radii, four appear to be more extended in CO, although this may simply be due to the higher optical depth of the CO. Comparison of the Upper Sco results with recent analyses of disks in Taurus, Ophiuchus, and Lupus suggests that the dust disks in Upper Sco may be approximately three times smaller in size than their younger counterparts, although we caution that a more uniform analysis of the data across all regions is needed. We discuss the implications of these results for disk evolution.
Limited potential for adaptation to climate change in a broadly distributed marine crustacean.
Kelly, Morgan W; Sanford, Eric; Grosberg, Richard K
2012-01-22
The extent to which acclimation and genetic adaptation might buffer natural populations against climate change is largely unknown. Most models predicting biological responses to environmental change assume that species' climatic envelopes are homogeneous both in space and time. Although recent discussions have questioned this assumption, few empirical studies have characterized intraspecific patterns of genetic variation in traits directly related to environmental tolerance limits. We test the extent of such variation in the broadly distributed tidepool copepod Tigriopus californicus using laboratory rearing and selection experiments to quantify thermal tolerance and scope for adaptation in eight populations spanning more than 17° of latitude. Tigriopus californicus exhibit striking local adaptation to temperature, with less than 1 per cent of the total quantitative variance for thermal tolerance partitioned within populations. Moreover, heat-tolerant phenotypes observed in low-latitude populations cannot be achieved in high-latitude populations, either through acclimation or 10 generations of strong selection. Finally, in four populations there was no increase in thermal tolerance between generations 5 and 10 of selection, suggesting that standing variation had already been depleted. Thus, plasticity and adaptation appear to have limited capacity to buffer these isolated populations against further increases in temperature. Our results suggest that models assuming a uniform climatic envelope may greatly underestimate extinction risk in species with strong local adaptation.
NASA Astrophysics Data System (ADS)
Xia, Yongfang; Shi, Junrui; Xu, Youning; Ma, Rui
2018-03-01
Filtration combustion (FC) is a style of porous media combustion with an inert matrix, in which the combustion wave front propagates either downstream only or reciprocally. In this paper, we investigate the inclinational instability of the FC flame front for lean methane/air mixtures flowing through a packed bed, assuming a non-uniformity of the initial preheating temperature as a perturbation of the combustion wave front. The predicted results show that the growth rate of the flame front inclination angle is proportional to the magnitude of the initial preheating temperature difference. Additionally, depending on the inlet gas velocity and equivalence ratio, it is demonstrated that an increase of the inlet gas velocity accelerates the FC wave front deformation, and the inclinational instability evolves faster at lower equivalence ratio. The development of the flame front inclination angle may be regarded as a two-stage evolution, comprising a rapid increase followed by an approach to the maximum inclination angle due to the quasi-steady condition of the combustion system. The hydrodynamic and thermal mechanisms of the FC inclinational instability are analyzed. Consequently, the local propagation velocity of the FC wave front is non-uniform, resulting in the growth of the inclination angle during the first stage of rapid increase.
Scafetta, Nicola
2011-12-01
Probability distributions of human displacements have been fit with exponentially truncated Lévy flights or fat-tailed Pareto inverse power-law probability distributions. Thus, people usually stay within a given location (for example, the city of residence), but with a non-vanishing frequency they visit nearby or far locations too. Herein, we show that an important empirical distribution of human displacements (range: from 1 to 1000 km) can be well fit by three consecutive Pareto distributions with simple integer exponents equal to 1, 2, and (>) 3. These three exponents correspond to three displacement range zones of about 1 km ≲ Δr ≲ 10 km, 10 km ≲ Δr ≲ 300 km, and 300 km ≲ Δr ≲ 1000 km, respectively. These three zones can be geographically and physically well determined as displacements within a city, visits to nearby cities that may occur within one-day trips, and visits to far locations that may require multi-day trips. The incremental integer values of the three exponents can be easily explained with a three-scale mobility cost/benefit model for human displacements based on simple geometrical constraints. Essentially, people would divide the space into three major regions (close, medium, and far distances) and would assume that the travel benefits are randomly/uniformly distributed mostly only within specific urban-like areas. The three displacement distribution zones appear to be characterized by an integer (1, 2, or >3) inverse power exponent because of the specific number (1, 2, or >3) of cost mechanisms (each of which is proportional to the displacement length). The distributions in the first two zones would be associated with Pareto distributions with exponent β = 1 and β = 2 because of simple geometrical statistical considerations due to the a priori assumption that most benefits are sought in the urban area of the city of residence or in the urban area of specific nearby cities.
We also show, by using independent records of human mobility, that the proposed model predicts the statistical properties of human mobility below 1 km ranges, where people just walk. In the latter case, the threshold between zone 1 and zone 2 may be around 100-200 m and, perhaps, may have been evolutionarily determined by the natural human high-resolution visual range, which characterizes an area of interest where the benefits are assumed to be randomly and uniformly distributed. This rich and suggestive interpretation of human mobility may characterize other complex random walk phenomena that may also be described by an N-piece fit of Pareto distributions with increasing integer exponents. This study also suggests that distribution functions used to fit experimental probability distributions must be carefully chosen so as not to improperly obscure the physics underlying a phenomenon.
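The three-zone Pareto fit described above can be written down directly: matching the prefactors at the 10 km and 300 km breakpoints keeps the density continuous across zones. Zone edges and exponents follow the abstract, with ">3" taken as 3 for concreteness; the density is left unnormalized for simplicity:

```python
import numpy as np

BREAKS = (1.0, 10.0, 300.0, 1000.0)  # km, zone edges quoted in the abstract
BETAS = (1.0, 2.0, 3.0)              # zone exponents (">3" taken as 3 here)

def _zone_scales():
    """Prefactors making c * r**(-beta) continuous across the breakpoints:
    at a break b between exponents b1 and b2, c2 = c1 * b**(b2 - b1)."""
    scales = [1.0]
    for b, (b1, b2) in zip(BREAKS[1:], zip(BETAS, BETAS[1:])):
        scales.append(scales[-1] * b ** (b2 - b1))
    return scales

SCALES = _zone_scales()

def displacement_pdf(r):
    """Unnormalized three-zone Pareto density for a displacement r (km)."""
    for (lo, hi), c, beta in zip(zip(BREAKS, BREAKS[1:]), SCALES, BETAS):
        if lo <= r < hi:
            return c * r ** (-beta)
    return 0.0

# continuity check at the 10 km and 300 km zone boundaries
for b in (10.0, 300.0):
    print(displacement_pdf(b - 1e-9), displacement_pdf(b))
```

The printed pairs match at each boundary, so the piecewise fit has no jumps even though the exponent changes.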
NASA Technical Reports Server (NTRS)
Sivapalan, Murugesu; Wood, Eric F.; Beven, Keith J.
1993-01-01
One of the shortcomings of the original theory of the geomorphologic unit hydrograph (GUH) is that it assumes that runoff is generated uniformly from the entire catchment area. It is now recognized that in many catchments much of the runoff during storm events is produced on partial areas which usually form on narrow bands along the stream network. A storm response model that includes runoff generation on partial areas by both Hortonian and Dunne mechanisms was recently developed by the authors. In this paper a methodology for integrating this partial area runoff generation model with the GUH-based runoff routing model is presented; this leads to a generalized GUH. The generalized GUH and the storm response model are then used to estimate physically based flood frequency distributions. In most previous work the initial moisture state of the catchment had been assumed to be constant for all the storms. In this paper we relax this assumption and allow the initial moisture conditions to vary between storms. The resulting flood frequency distributions are cast in a scaled dimensionless framework where issues such as catchment scale and similarity can be conveniently addressed. A number of experiments are performed to study the sensitivity of the flood frequency response to some of the 'similarity' parameters identified in this formulation. The results indicate that one of the most important components of the derived flood frequency model relates to the specification of processes within the runoff generation model; specifically the inclusion of both saturation excess and Horton infiltration excess runoff production mechanisms. The dominance of these mechanisms over different return periods of the flood frequency distribution can significantly affect the distributional shape and confidence limits about the distribution. Comparisons with observed flood distributions seem to indicate that such mixed runoff production mechanisms influence flood distribution shape. 
The sensitivity analysis also indicated that the incorporation of basin and rainfall storm scale also greatly influences the distributional shape of the flood frequency curve.
Transient well flow in layered aquifer systems: the uniform well-face drawdown solution
NASA Astrophysics Data System (ADS)
Hemker, C. J.
1999-11-01
Previously a hybrid analytical-numerical solution for the general problem of computing transient well flow in vertically heterogeneous aquifers was proposed by the author. The radial component of flow was treated analytically, while the finite-difference technique was used for the vertical flow component only. In the present work the hybrid solution has been modified by replacing the previously assumed uniform well-face gradient (UWG) boundary condition in such a way that the drawdown remains uniform along the well screen. The resulting uniform well-face drawdown (UWD) solution also includes the effects of a finite diameter well, wellbore storage and a thin skin, while partial penetration and vertical heterogeneity are accommodated by the one-dimensional discretization. Solutions are proposed for well flow caused by constant, variable and slug discharges. The model was verified by comparing wellbore drawdowns and well-face flux distributions with published numerical solutions. Differences between UWG and UWD well flow will occur in all situations with vertical flow components near the well, which is demonstrated by considering: (1) partially penetrating wells in confined aquifers, (2) fully penetrating wells in unconfined aquifers with delayed response and (3) layered aquifers and leaky multiaquifer systems. The presented solution can be a powerful tool for solving many well-hydraulic problems, including well tests, flowmeter tests, slug tests and pumping tests. A computer program for the analysis of pumping tests, based on the hybrid analytical-numerical technique and UWG or UWD conditions, is available from the author.
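For reference, the classical uniform-flux, fully penetrating baseline that these hybrid UWG/UWD solutions generalize is the Theis solution, a one-liner with the exponential integral. The parameter values below are illustrative, not from the paper:

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Classical Theis drawdown (fully penetrating well, homogeneous
    confined aquifer): s = Q/(4*pi*T) * W(u), with u = r**2 * S / (4*T*t)
    and W the well function, i.e. the exponential integral E1."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# drawdown 10 m from a well pumping 500 m^3/day after 1 day of pumping
# (T = 100 m^2/day, S = 1e-4; illustrative values)
s = theis_drawdown(r=10.0, t=1.0, Q=500.0, T=100.0, S=1e-4)
print(s)  # metres of drawdown
```

Wellbore storage, skin, partial penetration, and vertical heterogeneity, which the hybrid solution handles, all appear as departures from this idealized curve.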
Differences in evolutionary pressure acting within highly conserved ortholog groups.
Przytycka, Teresa M; Jothi, Raja; Aravind, L; Lipman, David J
2008-07-17
In highly conserved, widely distributed ortholog groups, the main evolutionary force is assumed to be purifying selection that enforces sequence conservation, with most divergence occurring by accumulation of neutral substitutions. Using a set of ortholog groups from prokaryotes, with a single representative in each studied organism, we asked whether this evolutionary pressure acts similarly on different subgroups of orthologs defined as major lineages (e.g. Proteobacteria or Firmicutes). Using correlations in entropy measures as a proxy for evolutionary pressure, we observed two distinct behaviors within our ortholog collection. The first subset of ortholog groups, called here informational, consisted mostly of proteins associated with information processing (i.e. translation, transcription, DNA replication), and the second, the non-informational ortholog groups, consisted mostly of proteins involved in metabolic pathways. The evolutionary pressure acting on non-informational proteins is more uniform than that on their informational counterparts: the non-informational proteins show a higher level of correlation between entropy profiles and more uniformity across subgroups. The low correlation of entropy profiles in the informational ortholog groups suggests that the evolutionary pressure acting on them is not uniform across the different clades considered in this study. This might suggest "fine-tuning" of informational proteins in each lineage, leading to lineage-specific differences in selection. This, in turn, could make these proteins less exchangeable between lineages. In contrast, the uniformity of the selective pressure acting on the non-informational groups might allow the exchange of genetic material via lateral gene transfer.
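The entropy-profile correlation used here as a proxy for evolutionary pressure can be sketched on toy alignments; the sequences below are invented for illustration, and real analyses would use full multiple sequence alignments per lineage:

```python
import numpy as np
from collections import Counter

def column_entropies(alignment):
    """Shannon entropy (bits) of each column of a multiple sequence
    alignment given as a list of equal-length strings."""
    out = []
    for col in zip(*alignment):
        counts = np.array(list(Counter(col).values()), dtype=float)
        p = counts / counts.sum()
        out.append(-(p * np.log2(p)).sum())
    return np.array(out)

# toy alignments for the same ortholog group in two lineages
proteo = ["MKVL", "MKVL", "MRVL", "MKIL"]
firmi  = ["MKVL", "MRVL", "MKVL", "MKIL"]
h1, h2 = column_entropies(proteo), column_entropies(firmi)
r = np.corrcoef(h1, h2)[0, 1]
print(r)  # high r suggests similar site-wise evolutionary pressure
```

A low correlation between the two lineages' entropy profiles would indicate that different sites are variable in each clade, the signature of non-uniform pressure described above.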
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginn, Timothy R.; Weathers, Tess
Biogeochemical modeling using PHREEQC2 and a streamtube ensemble approach is utilized to understand a well-to-well subsurface treatment system at the Vadose Zone Research Park (VZRP) near Idaho Falls, Idaho. Treatment involves in situ microbially-mediated ureolysis to induce calcite precipitation for the immobilization of strontium-90. PHREEQC2 is utilized to model the kinetically-controlled ureolysis and consequent calcite precipitation. Reaction kinetics, equilibrium phases, and cation exchange are used within PHREEQC2 to track pH and levels of calcium, ammonium, urea, and calcite precipitation over time, within a series of one-dimensional advective-dispersive transport paths creating a streamtube ensemble representation of the well-to-well transport. An understanding of the impact of physical heterogeneities within this radial flowfield is critical for remediation design; we address this via the streamtube approach: instead of depicting spatial extents of solutes in the subsurface we focus on their arrival distribution at the control well(s). Traditionally, each streamtube maintains uniform velocity; however, in radial flow in homogeneous media, the velocity within any given streamtube is spatially variable in a common way, being highest at the input and output wells and approaching a minimum at the midpoint between the wells. This idealized velocity variability is of significance in the case of ureolytically driven calcite precipitation. Streamtube velocity patterns for any particular configuration of injection and withdrawal wells are available as explicit calculations from potential theory, and also from particle tracking programs. To approximate the actual spatial distribution of velocity along streamtubes, we assume idealized radial non-uniform velocity associated with homogeneous media.
This is implemented in PHREEQC2 via a non-uniform spatial discretization within each streamtube that honors both the streamtube's travel time and the idealized "fast-slow-fast" pattern of non-uniform velocity along the streamline. Breakthrough curves produced by each simulation are weighted by the path-respective flux fractions (obtained by deconvolution of tracer tests conducted at the VZRP) to obtain the flux-average of flow contributions to the observation well.
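The "fast-slow-fast" travel-time discretization can be sketched for an idealized doublet: superposing a 2-D source and sink separated by L gives a center-streamline velocity proportional to 1/x + 1/(L − x), and cell edges are then placed at equal increments of cumulative travel time. The well spacing and cell count below are assumptions for illustration, not VZRP values:

```python
import numpy as np

L = 100.0   # well spacing (m), hypothetical
N = 20      # number of transport cells per streamtube

# idealized doublet velocity along the center streamline: fast near both
# wells, slowest at the midpoint (well faces excluded to avoid singularities)
x = np.linspace(0.5, L - 0.5, 4001)
v = 1.0 / x + 1.0 / (L - x)

# cumulative travel time by trapezoidal integration of dt = dx / v
tau = np.concatenate(
    [[0.0], np.cumsum(np.diff(x) * 0.5 * (1.0 / v[1:] + 1.0 / v[:-1]))]
)

# cell edges at equal increments of travel time -> non-uniform spatial grid
edges = np.interp(np.linspace(0.0, tau[-1], N + 1), tau, x)
widths = np.diff(edges)
print(widths[:2], widths[N // 2 - 1 : N // 2 + 1])  # wide near wells, narrow mid-way
```

Equal-travel-time cells are spatially wide where the flow is fast and narrow near the midpoint, which is exactly the pattern the non-uniform PHREEQC2 discretization honors.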
Radar Doppler Processing with Nonuniform Sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.
2017-07-01
Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing. This is for the convenience of the processing. More recent performance enhancements in processor capability allow optimally processing nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
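One standard way to estimate a tone frequency from nonuniformly spaced samples is the Lomb-Scargle periodogram. This is a generic illustration with simulated pulse times, not the report's processing chain:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
f_dopp = 37.0                               # Hz, simulated Doppler frequency
t = np.sort(rng.uniform(0.0, 1.0, 200))     # nonuniform pulse times (s)
y = np.cos(2 * np.pi * f_dopp * t) + 0.1 * rng.standard_normal(t.size)

freqs = np.linspace(1.0, 100.0, 2000)       # search grid (Hz)
pgram = lombscargle(t, y, 2 * np.pi * freqs)  # lombscargle expects rad/s
f_est = freqs[np.argmax(pgram)]
print(f_est)  # peak near the simulated 37 Hz
```

Because the sample spacing is irregular, there is no fixed Nyquist fold-over, which is one way nonuniform sampling relaxes the Doppler ambiguity of uniform pulse trains.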
Simulating the Dependence of Aspen on Redistributed Snow
NASA Astrophysics Data System (ADS)
Soderquist, B.; Kavanagh, K.; Link, T. E.; Seyfried, M. S.; Winstral, A. H.
2013-12-01
In mountainous regions across the western USA, the distribution of aspen (Populus tremuloides) is often directly related to heterogeneous soil moisture subsidies resulting from redistributed snow. With decades of climate and precipitation data across elevational and precipitation gradients, the Reynolds Creek Experimental Watershed (RCEW) in southwest Idaho provides a unique opportunity to study the relationship between aspen and redistributed snow. Within the RCEW, the total amount of precipitation has not changed in the past 50 years, but there are sharp declines in the percentage of the precipitation falling as snow. As shifts in the distribution of available moisture continue, future trends in aspen net primary productivity (NPP) remain uncertain. In order to assess the importance of snowdrift subsidies, NPP of three aspen stands was simulated at sites spanning elevational and precipitation gradients using the biogeochemical process model BIOME-BGC. At the aspen site experiencing the driest climate and lowest amount of precipitation from snow, approximately 400 mm of total precipitation was measured from November to March of 2008. However, peak measured snow water equivalent (SWE) held in drifts directly upslope of this stand was approximately 2100 mm, 5 times more moisture than the uniform winter precipitation layer initially assumed by BIOME-BGC. BIOME-BGC simulations in dry years forced by adjusted precipitation data resulted in NPP values approximately 30% higher than simulations assuming a uniform precipitation layer. Using BIOME-BGC and climate data from 1985-2011, the relationship between simulated NPP and measured basal area increments (BAI) improved after accounting for redistributed snow, indicating that the simulations better represent stand productivity.
In addition to improved simulation capabilities, soil moisture data, diurnal branch water potential, and stomatal conductance observations at each site detail the use of soil moisture in the rooting zone and the onset of drought stress occurring in stands located along a precipitation phase gradient. These results further emphasize the importance of redistributed snow in heterogeneous landscapes along with the need to account for temporal shifts in water resource availability when assessing ecosystem vulnerability to climate change.
Impact of uniform electrode current distribution on ETF
NASA Technical Reports Server (NTRS)
Bents, D. J.
1982-01-01
The design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution are examined, and the alternative consolidation designs which occur are presented and compared to the baseline (non-uniform current) design with respect to performance and hardware requirements. A rational basis is given for comparing the requirements of the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.
Bozorgi-Amiri, Ali; Tavakoli, Shayan; Mirzaeipour, Hossein; Rabbani, Masoud
2017-03-01
Helicopter emergency medical service (HEMS) plays an important role in reducing injuries by providing advanced medical care in the shortest time and reducing the transfer time to advanced treatment centers. In regions without ground relief coverage, it is faster to transfer emergency patients to the hospital by helicopter. In this paper, an integer nonlinear programming model is presented for the integrated location of helicopter stations and helipads under uncertainty in demand points. We assume three transfer modes: (1) direct transfer by an ambulance; (2) transfer by an ambulance to a helicopter station and then to the hospital by helicopter; (3) transfer by an ambulance to a predetermined point and then to the hospital by helicopter. We also assume that demands occur in a square-shaped area, in which each coordinate follows a uniform distribution; modeling demand over an area in this way reduces errors in the distances between cities. The purpose of this model is to minimize the transfer time from demand points to the hospital by considering the different modes. The proposed model is examined in terms of validity and applicability in Lorestan Province, and a sensitivity analysis is also conducted on the total allocated budget. Copyright © 2016 Elsevier Inc. All rights reserved.
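The square-shaped demand assumption can be illustrated by Monte Carlo: with each coordinate uniform over the square, the expected transfer distance slightly exceeds the distance computed from the square's center alone (Jensen's inequality). All coordinates below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
side = 20.0                          # km, hypothetical demand square side
center = np.array([35.0, 10.0])      # km, square center
hospital = np.array([0.0, 0.0])      # km, hospital location

# demand points uniform over the square (each coordinate ~ Uniform)
pts = center + rng.uniform(-side / 2, side / 2, size=(100000, 2))
dists = np.linalg.norm(pts - hospital, axis=1)

print(dists.mean())                       # expected transfer distance
print(np.linalg.norm(center - hospital))  # center-point approximation
```

The gap between the two printed numbers is the error a point-demand model would make for this square, which is the kind of distance error the areal-demand formulation addresses.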
Size distributions and failure initiation of submarine and subaerial landslides
ten Brink, Uri S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.
2009-01-01
Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size, with few smaller and larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit the area distribution of landslide sources along the Atlantic continental margin well, if we assume that the slope has been subjected to earthquakes of magnitude ≥ 6.3. Regions of submarine landslides, whose area distributions obey inverse power laws, may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions.
However, for a given earthquake magnitude, the total area affected by subaerial landslides is comparable to that calculated by slope stability analysis for submarine landslides. The area distribution of subaerial landslides from a single event may be determined by the size distribution of the morphology of the affected area, not by the initiation process. © 2009 Elsevier B.V.
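The simulation idea in the abstract, failure occurring at once over the shaken area and yielding a characteristic size rather than a power-law tail, can be sketched numerically. The magnitude-area scaling and the scatter below are illustrative assumptions, not the paper's calibrated slope-stability model:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_area_km2(magnitude, rng, sigma=0.25):
    """Slope-failure area for one shaking event. Assumes the failed
    area tracks the strongly shaken region via a Wells-Coppersmith-like
    scaling log10(A) = M - 4 with lognormal scatter -- an illustrative
    stand-in for the paper's slope-stability calculation."""
    return 10.0 ** (magnitude - 4.0 + rng.normal(0.0, sigma))

# Repeated M ~= 6.3 events yield a single-peaked, lognormal-like area
# distribution with a characteristic size, not a power-law tail.
areas = np.array([failure_area_km2(6.3, rng) for _ in range(10_000)])
log_areas = np.log(areas)
skew = ((log_areas - log_areas.mean()) ** 3).mean() / log_areas.std() ** 3
print(np.median(areas), skew)  # log-areas near-symmetric => lognormal-like
```

Because the size is set by the shaken area rather than by an avalanche cascade, the simulated areas cluster around a characteristic value instead of spanning decades as a power law would.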
NASA Astrophysics Data System (ADS)
Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.
2017-01-01
Glass/carbon fiber composites are widely used in the design of various aircraft and rotorcraft components, such as fairings and cowlings, which have a predominantly shell-like geometry and are made of quasi-isotropic laminates. The main requirements for such composite parts are a specified mechanical stiffness, to withstand the non-uniform air pressure at different flight conditions and to reduce the level of noise caused by airflow-induced vibrations, at a constrained part weight. The main objective of the present study is the optimization of the wall thickness and lay-up of a composite shell-like cowling. The present approach converts the CAD model of the cowling surface to a finite element (FE) representation and then simulates wind tunnel testing at different airflow orientations to find the most stressed flight mode. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations, supplemented by a k-ω turbulence model, provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective, and the wall thickness of the shell was allowed to vary over its surface to minimize the objective at a constrained weight. We used a parameterization of the problem that introduces an auxiliary sphere whose radius and center coordinates are the design variables. The curve formed by the intersection of the shell with the sphere defines the boundary of the area to be reinforced by locally thickening the shell wall. To avoid local stress concentrations, this thickness increment was defined as a smooth function on the shell surface. As a result of the structural optimization we obtained the wall-thickness distribution of the shell, which was then used to design the draping and lay-up of the composite prepreg layers. 
The global strain energy in the optimized cowling was reduced by a factor of 2.5 at a weight increase of up to 15%, whereas the eigenfrequencies of the first 6 natural vibration modes increased by 5-15%. The present approach and the developed programming tools, which demonstrated good efficiency and stability at acceptable computational cost, can be used to optimize a wide range of shell-like structures made of quasi-isotropic laminates.
Treatment of internal sources in the finite-volume ELLAM
Healy, R.W.
2000-01-01
The finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) is a mass-conservative approach for solving the advection-dispersion equation. The method has been shown to be accurate and efficient for solving advection-dominated problems of solute transport in ground water in 1, 2, and 3 dimensions. Previous implementations of FVELLAM have had difficulty in representing internal sources because the standard assumption of lowest order Raviart-Thomas velocity field does not hold for source cells. Therefore, tracking of particles within source cells is problematic. A new approach has been developed to account for internal sources in FVELLAM. It is assumed that the source is uniformly distributed across a grid cell and that instantaneous mixing takes place within the cell, such that concentration is uniform across the cell at any time. Sub-time steps are used in the time-integration scheme to track mass outflow from the edges of the source cell. This avoids the need for tracking within the source cell. We describe the new method and compare results for a test problem with a wide range of cell Peclet numbers.
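The source-cell treatment lends itself to a compact sketch. Assuming, as the method does, instantaneous mixing so that the cell concentration is uniform at all times, the cell obeys a simple balance ODE, and sub-time steps accumulate the mass leaving through the edges; the parameter values below are arbitrary, not from the paper:

```python
def source_cell_outflow(c0, q_source, q_out, volume, dt, n_sub):
    """Well-mixed source cell: concentration is uniform across the cell
    at any time, so dC/dt = (q_source - q_out*C)/V.  Sub-time steps
    accumulate the mass leaving through the cell edges, replacing
    particle tracking inside the cell.  Parameter values are arbitrary."""
    c = c0
    mass_out = 0.0
    h = dt / n_sub
    for _ in range(n_sub):
        outflow = q_out * c          # mass per unit time crossing the edges
        mass_out += outflow * h
        c += h * (q_source - outflow) / volume
    return c, mass_out

# source strength 1, edge outflow coefficient 0.5, cell volume 10, dt = 2
c_end, m_out = source_cell_outflow(0.0, 1.0, 0.5, 10.0, 2.0, n_sub=100)
print(c_end, m_out)   # approaches the exact 2*(1 - exp(-0.05*t)) solution
```

Refining `n_sub` sharpens the outflow estimate without ever tracking particles inside the source cell, which is the point of the scheme.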
Three-dimensional finite-element analysis of chevron-notched fracture specimens
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Stress-intensity factors and load-line displacements were calculated for chevron-notched bar and rod fracture specimens using a three-dimensional finite-element analysis. Both specimens were subjected to simulated wedge loading (either uniform applied displacement or uniform applied load). The chevron-notch sides and crack front were assumed to be straight. Crack-length-to-specimen width ratios (a/w) ranged from 0.4 to 0.7. The width-to-thickness ratio (w/B) was 1.45 or 2. The bar specimens had a height-to-width ratio of 0.435 or 0.5. Finite-element models were composed of singularity elements around the crack front and 8-noded isoparametric elements elsewhere. The models had about 11,000 degrees of freedom. Stress-intensity factors were calculated by using a nodal-force method for distribution along the crack front and by using a compliance method for average values. The stress intensity factors and load-line displacements are presented and compared with experimental solutions from the literature. The stress intensity factors and load-line displacements were about 2.5 and 5 percent lower than the reported experimental values, respectively.
Modeling surface response of the Greenland Ice Sheet to interglacial climate
NASA Astrophysics Data System (ADS)
Rau, Dominik; Rogozhina, Irina
2013-04-01
We present a new parameterization of surface mass balance (SMB) of the Greenland Ice Sheet (GIS) under interglacial climate conditions validated against recent satellite observations on a regional scale. Based on detailed analysis of the modeled surface melting and refreezing rates, we conclude that the existing SMB parameterizations fail to capture either spatial pattern or amplitude of the observed surface response of the GIS. This is due to multiple simplifying assumptions adopted by the majority of modeling studies within the frame of the positive degree day method. Modeled spatial distribution of surface melting is found to be highly sensitive to a choice of daily temperature standard deviation (SD) and degree-day factors, which are generally assumed to have uniform distribution across the entire Greenland region. However, the use of uniform SD distribution and the range of commonly used SD values are absolutely unsupported by the ERA-40 and ERA-Interim climate data. In this region, SD distribution is highly inhomogeneous and characterized by low amplitudes during the summer months in the areas where most surface ice melting occurs. In addition, the use of identical degree day factors on both the eastern and western slopes of the GIS results in overestimation of surface runoff along the western coast of Greenland and significant underestimation along its eastern coast. Our approach is to make use of (i) spatially and seasonally variable SDs derived from ERA-40 and ERA-Interim time series, and (ii) spatially variable degree-day factors, measured across Greenland, Arctic Canada, Norway, Spitsbergen and Iceland. We demonstrate that the new approach is extremely efficient for modeling the evolution of the GIS during the observational period and the entire Holocene interglacial.
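The sensitivity to the daily temperature standard deviation can be illustrated with the standard expected positive-degree-day integral for a normally distributed daily temperature (the Calov-Greve formula, a common ingredient of PDD schemes; the numbers below are illustrative, not Greenland values):

```python
import math

def expected_pdd_per_day(t_mean, sigma):
    """Expected positive degree-days per day when daily temperature
    ~ N(t_mean, sigma), via the standard Calov-Greve closed form
    used in positive-degree-day melt models."""
    return (sigma / math.sqrt(2.0 * math.pi)
            * math.exp(-t_mean ** 2 / (2.0 * sigma ** 2))
            + 0.5 * t_mean * math.erfc(-t_mean / (sigma * math.sqrt(2.0))))

# Illustrative numbers (not from the paper): at a mean of -1 C, raising
# the assumed SD from 2 C to 5 C more than triples the expected melt
# forcing, which is why a uniform SD map biases modeled runoff.
for sigma in (2.0, 5.0):
    print(sigma, expected_pdd_per_day(-1.0, sigma))
```

Near the melting point the result is dominated by the SD rather than by the mean, so spatially variable SD fields change the melt pattern, not just its amplitude.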
Bowtie filters for dedicated breast CT: Analysis of bowtie filter material selection.
Kontson, Kimberly; Jennings, Robert J
2015-09-01
For a given bowtie filter design, both the selection of material and the physical design control the energy fluence, and consequently the dose distribution, in the object. Using three previously described bowtie filter designs, the goal of this work is to demonstrate the effect that different materials have on the bowtie filter performance measures. Three bowtie filter designs that compensate for one or more aspects of the beam-modifying effects due to the differences in path length in a projection have been designed. The nature of the designs allows for their realization using a variety of materials. The designs were based on a phantom, 14 cm in diameter, composed of 40% fibroglandular and 60% adipose tissue. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis-material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. With bowtie design #3, it is possible to eliminate the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. Seven different materials were chosen to represent a range of chemical compositions and densities. After calculation of construction parameters for each bowtie filter design, a bowtie filter was created using each of these materials (assuming reasonable construction parameters were obtained), resulting in a total of 26 bowtie filters modeled analytically and in the PENELOPE Monte Carlo simulation environment. Using the analytical model of each bowtie filter, design profiles were obtained and energy fluence as a function of fan-angle was calculated. Projection images with and without each bowtie filter design were also generated using PENELOPE and reconstructed using FBP. 
Parameters such as dose distribution, noise uniformity, and scatter were investigated. Analytical calculations with and without each bowtie filter show that some materials for a given design produce bowtie filters that are too large for implementation in breast CT scanners or too small to accurately manufacture. Results also demonstrate the ability to manipulate the energy fluence distribution (dynamic range) by using different materials, or different combinations of materials, for a given bowtie filter design. This feature is especially advantageous when using photon counting detector technology. Monte Carlo simulation results from PENELOPE show that all studied material choices for bowtie design #2 achieve nearly uniform dose distribution, noise uniformity index less than 5%, and nearly uniform scatter-to-primary ratio. These same features can also be obtained using certain materials with bowtie designs #1 and #3. With the three bowtie filter designs used in this work, the selection of material is an important design consideration. An appropriate material choice can improve image quality, dose uniformity, and dynamic range.
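The idea behind bowtie design #3, equalizing the effective attenuation of every ray, reduces to a one-line thickness formula once the path length through the phantom is known. The fan-beam geometry and the attenuation coefficients below are illustrative assumptions, not values from the paper:

```python
import math

def path_length_cm(fan_angle_rad, radius_cm=7.0, src_dist_cm=40.0):
    """Chord length of a fan-beam ray through a centered cylindrical
    phantom (14 cm diameter, as in the designs; the source distance
    is an assumed geometry, not taken from the paper)."""
    d = src_dist_cm * math.sin(fan_angle_rad)   # ray-to-axis distance
    if abs(d) >= radius_cm:
        return 0.0
    return 2.0 * math.sqrt(radius_cm ** 2 - d ** 2)

def bowtie_thickness_cm(fan_angle_rad, mu_tissue, mu_filter):
    """Filter thickness equalizing the total attenuation of every ray:
    the equal-effective-attenuation idea behind bowtie design #3."""
    a_central = mu_tissue * path_length_cm(0.0)
    a_ray = mu_tissue * path_length_cm(fan_angle_rad)
    return (a_central - a_ray) / mu_filter

# illustrative coefficients: tissue ~0.2 /cm, filter material ~0.5 /cm
for deg in (0, 5, 10):
    print(deg, round(bowtie_thickness_cm(math.radians(deg), 0.2, 0.5), 3))
```

The formula also makes the material-selection trade-off visible: a low-attenuation filter material inflates the thickness (too large to fit a scanner), while a high-attenuation one shrinks it toward machining tolerances.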
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.
1988-01-01
During the shutdown of the space shuttle main engine, oxygen flow is shut off from the fuel preburner and helium is used to push the residual oxygen into the combustion chamber. During this process a low frequency combustion instability, or chug, occurs. This chug has resulted in damage to the engine's augmented spark igniter due to backflow of the contents of the preburner combustion chamber into the oxidizer feed system. To determine possible causes and fixes for the chug, the fuel preburner was modeled as a heterogeneous stirred tank combustion chamber, a variable mass flow rate oxidizer feed system, a constant mass flow rate fuel feed system and an exit turbine. Within the combustion chamber gases were assumed perfectly mixed. To account for liquid in the combustion chamber, a uniform droplet distribution was assumed to exist in the chamber, with mean droplet diameter determined from an empirical relation. A computer program was written to integrate the resulting differential equations. Because chamber contents were assumed perfectly mixed, the fuel preburner model erroneously predicted that combustion would not take place during shutdown. The combustion rate model was modified to assume that all liquid oxygen that vaporized instantaneously combusted with fuel. Using this combustion model, the effect of engine parameters on chamber pressure oscillations during the SSME shutdown was calculated.
Resource heterogeneity can facilitate cooperation.
Kun, Ádám; Dieckmann, Ulf
2013-01-01
Although social structure is known to promote cooperation, by locally exposing selfish agents to their own deeds, studies to date assumed that all agents have access to the same level of resources. This is clearly unrealistic. Here we find that cooperation can be maintained when some agents have access to more resources than others. Cooperation can then emerge even in populations in which the temptation to defect is so strong that players would act fully selfishly if their resources were distributed uniformly. Resource heterogeneity can thus be crucial for the emergence and maintenance of cooperation. We also show that resource heterogeneity can hinder cooperation once the temptation to defect is significantly lowered. In all cases, the level of cooperation can be maximized by managing resource heterogeneity.
Topology for efficient information dissemination in ad-hoc networking
NASA Technical Reports Server (NTRS)
Jennings, E.; Okino, C. M.
2002-01-01
In this paper, we explore the information dissemination problem in ad-hoc wireless networks. First, we analyze the probability of successful broadcast, assuming: the nodes are uniformly distributed, the available area has a lower bound relative to the total number of nodes, and there is zero knowledge of the overall topology of the network. By showing that the probability of such events is small, we are motivated to extract good graph topologies to minimize the overall transmissions. Three algorithms are used to generate topologies of the network with guaranteed connectivity. These are the minimum radius graph, the relative neighborhood graph and the minimum spanning tree. Our simulation shows that the relative neighborhood graph has certain good graph properties, which makes it suitable for efficient information dissemination.
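The broadcast analysis can be reproduced in miniature with a Monte Carlo over random geometric graphs: nodes uniform in the unit square, a flood started at one node, success iff everyone is reached. The node count, radii, and trial counts below are arbitrary choices, not the paper's settings:

```python
import numpy as np

def broadcast_succeeds(n_nodes, radius, rng):
    """One trial: n_nodes uniform in the unit square; a flood started
    at node 0 reaches everyone iff the radius-r geometric graph is
    connected (zero knowledge of the overall topology, as assumed)."""
    pts = rng.random((n_nodes, 2))
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    adj = d2 <= radius * radius
    reached = np.zeros(n_nodes, dtype=bool)
    reached[0] = True
    frontier = [0]
    while frontier:
        nxt = adj[frontier].any(axis=0) & ~reached
        reached |= nxt
        frontier = np.flatnonzero(nxt).tolist()
    return bool(reached.all())

rng = np.random.default_rng(1)
probs = {r: np.mean([broadcast_succeeds(50, r, rng) for _ in range(200)])
         for r in (0.1, 0.3)}
print(probs)  # success is rare below the connectivity threshold radius
```

Below the connectivity threshold the broadcast almost always strands some nodes, which motivates extracting a connected backbone such as the relative neighborhood graph or minimum spanning tree instead of flooding blindly.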
NASA Technical Reports Server (NTRS)
Eckert, E.R.G.; Livingood, John N.B.
1951-01-01
An approximate method for the development of flow and thermal boundary layers in the laminar regime on cylinders with arbitrary cross section and transpiration-cooled walls is obtained by use of Karman's integrated momentum equation and an analogous heat-flow equation. Incompressible flow with constant property values throughout the boundary layer is assumed. Shape parameters for the approximated velocity and temperature profiles and the functions necessary for solution of the boundary-layer equations are presented as charts, reducing calculations to a minimum. The method is applied to determine local heat-transfer coefficients and surface temperatures of transpiration-cooled turbine blades for a given coolant flow rate. Coolant flow distributions necessary for maintaining uniform blade temperatures are also determined.
Transport of photons produced by lightning in clouds
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard
1991-01-01
The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are mentioned which are used to explain the nature of these effects: Monte Carlo simulation; and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.
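Although the abstract pursues a transport-theory solution, the Monte Carlo alternative it mentions is easy to sketch for a uniform cuboidal cloud: exponential free paths, isotropic elastic scattering, and absorption with probability 1 − albedo per droplet encounter. All parameter values here are illustrative:

```python
import numpy as np

def escape_fraction(n_photons, side, mfp, albedo, rng):
    """Fraction of photons escaping a cuboidal cloud of uniform,
    homogeneous droplets: exponential free paths (mean mfp),
    isotropic elastic rescattering, absorption with probability
    1 - albedo at each encounter. Units are arbitrary."""
    escaped = 0
    for _ in range(n_photons):
        pos = np.array([side / 2.0, side / 2.0, 0.0])   # injected at base
        direction = np.array([0.0, 0.0, 1.0])
        while True:
            pos = pos + direction * rng.exponential(mfp)
            if (pos < 0.0).any() or (pos > side).any():
                escaped += 1
                break
            if rng.random() > albedo:
                break                                    # absorbed in cloud
            u = 2.0 * rng.random() - 1.0                 # isotropic direction
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - u * u)
            direction = np.array([s * np.cos(phi), s * np.sin(phi), u])
    return escaped / n_photons

rng = np.random.default_rng(2)
f_low = escape_fraction(1000, side=1.0, mfp=0.2, albedo=0.5, rng=rng)
f_high = escape_fraction(1000, side=1.0, mfp=0.2, albedo=0.99, rng=rng)
print(f_low, f_high)  # more absorbing droplets -> fewer photons escape
```

This path-by-path simulation carries no intensity information between runs, which is exactly the limitation that motivates the Boltzmann transport treatment in the abstract.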
Effect of current on spectrum of breaking waves in water of finite depth
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.
1987-01-01
This paper presents an approximate method to compute the mean value, the mean square value and the spectrum of waves in water of finite depth taking into account the effect of wave breaking with or without the presence of current. It is assumed that there exists a linear and Gaussian ideal wave train whose spectrum is first obtained using the wave energy flux balance equation without considering wave breaking. The Miche wave breaking criterion for waves in finite water depth is used to limit the wave elevation and establish an expression for the breaking wave elevation in terms of the elevation and its second time derivative of the ideal waves. Simple expressions for the mean value, the mean square value and the spectrum are obtained. These results are applied to the case in which a deep water unidirectional wave train, propagating normally towards a straight shoreline over gently varying sea bottom of parallel and straight contours, encounters an adverse steady current whose velocity is assumed to be uniformly distributed with depth. Numerical results are obtained and presented in graphical form.
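The Miche criterion used to cap the wave elevations can be evaluated directly: H_max = 0.142 L tanh(2πd/L), with the wavelength obtained from the linear dispersion relation. A sketch for an assumed 8 s wave period (the period and depths are illustrative, not from the paper):

```python
import math

def wavenumber(omega, depth, g=9.81):
    """Solve the linear dispersion relation omega^2 = g k tanh(k d)
    by fixed-point iteration from the deep-water guess."""
    k = omega * omega / g
    for _ in range(100):
        k = omega * omega / (g * math.tanh(k * depth))
    return k

def miche_limit_height(period_s, depth_m):
    """Maximum (breaking) wave height from the Miche criterion
    H_max = 0.142 * L * tanh(2*pi*d/L), the limit used to cap the
    ideal Gaussian wave elevations."""
    omega = 2.0 * math.pi / period_s
    k = wavenumber(omega, depth_m)
    wavelength = 2.0 * math.pi / k
    return 0.142 * wavelength * math.tanh(k * depth_m)

# shoaling toward shore: the breaking limit tightens as depth decreases
for d in (50.0, 10.0, 3.0):
    print(d, round(miche_limit_height(8.0, d), 2))
```

As the depth shrinks along the shoreward path, the admissible height drops sharply, which is how breaking reshapes the spectrum in the paper's finite-depth setting.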
Elastic properties of woven bone: effect of mineral content and collagen fibrils orientation.
García-Rodríguez, J; Martínez-Reina, J
2017-02-01
Woven bone is a type of tissue that forms mainly during fracture healing or fetal bone development. Its microstructure can be modeled as a composite with a matrix of mineral (hydroxyapatite) and inclusions of collagen fibrils with a more or less random orientation. In the present study, its elastic properties were estimated as a function of composition (degree of mineralization) and fibril orientation. A self-consistent homogenization scheme considering randomness of inclusions' orientation was used for this purpose. Lacuno-canalicular porosity in the form of periodically distributed void inclusions was also considered. Assuming collagen fibrils to be uniformly oriented in all directions led to an isotropic tissue with a Young's modulus [Formula: see text] GPa, which is of the same order of magnitude as that of woven bone in fracture calluses. By contrast, assuming fibrils to have a preferential orientation resulted in a Young's modulus in the preferential direction of 9-16 GPa depending on the mineral content of the tissue. These results are consistent with experimental evidence for woven bone in foetuses, where collagen fibrils are aligned to a certain extent.
NASA Technical Reports Server (NTRS)
Fontenla, J. M.; Avrett, E. H.; Loeser, R.
1990-01-01
The energy balance in the lower transition region is analyzed by constructing theoretical models which satisfy the energy balance constraint. The energy balance is achieved by balancing the radiative losses and the energy flowing downward from the corona. This energy flow is mainly in two forms: conductive heat flow and hydrogen ionization energy flow due to ambipolar diffusion. Hydrostatic equilibrium is assumed, and, in a first calculation, local mechanical heating and Joule heating are ignored. In a second model, some mechanical heating compatible with chromospheric energy-balance calculations is introduced. The models are computed for a partial non-LTE approach in which radiation departs strongly from LTE but particles depart from Maxwellian distributions only to first order. The results, which apply to cases where the magnetic field is either absent, or uniform and vertical, are compared with the observed Lyman lines and continuum from the average quiet sun. The approximate agreement suggests that this type of model can roughly explain the observed intensities in a physically meaningful way, assuming only a few free parameters specified as chromospheric boundary conditions.
3-D Spontaneous Rupture Simulations of the 2016 Kumamoto, Japan, Earthquake
NASA Astrophysics Data System (ADS)
Urata, Yumi; Yoshida, Keisuke; Fukuyama, Eiichi
2017-04-01
We investigated the M7.3 Kumamoto, Japan, earthquake to illuminate why and how the rupture of the main shock propagated successfully by 3-D dynamic rupture simulations, assuming a complicated fault geometry estimated based on the distributions of aftershocks. The M7.3 main shock occurred along the Futagawa and Hinagu faults. A few days before, three M6-class foreshocks occurred. Their hypocenters were located along the Hinagu and Futagawa faults and their focal mechanisms were similar to those of the main shock; therefore, an extensive stress shadow could have been generated on the fault plane of the main shock. First, we estimated the geometry of the fault planes of the three foreshocks as well as that of the main shock based on the temporal evolution of relocated aftershock hypocenters. Then, we evaluated static stress changes on the main shock fault plane due to the occurrence of the three foreshocks, assuming elliptical cracks with constant stress drops on the estimated fault planes. The obtained static stress change distribution indicated that the hypocenter of the main shock was located in a region of positive Coulomb failure stress change (ΔCFS), while ΔCFS in the shallow region above the hypocenter was negative. Therefore, these foreshocks could have encouraged the initiation of the main shock rupture and hindered the rupture from propagating toward the shallow region. Finally, we conducted 3-D dynamic rupture simulations of the main shock using the initial stress distribution, which was the sum of the static stress changes caused by these foreshocks and the regional stress field. Assuming a slip-weakening law with uniform friction parameters, we conducted 3-D dynamic rupture simulations by varying the friction parameters and the values of the principal stresses. We obtained feasible parameter ranges that reproduce the rupture propagation of the main shock consistent with that revealed by seismic waveform analyses. 
We also demonstrated that the free surface encouraged the slip evolution of the main shock.
Elastic strain field due to an inclusion of a polyhedral shape with a non-uniform lattice misfit
NASA Astrophysics Data System (ADS)
Nenashev, A. V.; Dvurechenskii, A. V.
2017-03-01
An analytical solution in a closed form is obtained for the three-dimensional elastic strain distribution in an unlimited medium containing an inclusion with a coordinate-dependent lattice mismatch (an eigenstrain). Quantum dots consisting of a solid solution with a spatially varying composition are examples of such inclusions. It is assumed that both the inclusion and the surrounding medium (the matrix) are elastically isotropic and have the same Young's modulus and Poisson ratio. The inclusion shape is supposed to be an arbitrary polyhedron, and the coordinate dependence of the lattice misfit, with respect to the matrix, is assumed to be a polynomial of any degree. It is shown that, both inside and outside the inclusion, the strain tensor is expressed as a sum of contributions of all faces, edges, and vertices of the inclusion. Each of these contributions, as a function of the observation point's coordinates, is a product of some polynomial and a simple analytical function, which is the solid angle subtended by the face from the observation point (for a contribution of a face), or the potential of the uniformly charged edge (for a contribution of an edge), or the distance from the vertex to the observation point (for a contribution of a vertex). The method of constructing the relevant polynomial functions is suggested. We also found out that similar expressions describe an electrostatic or gravitational potential, as well as its first and second derivatives, of a polyhedral body with a charge/mass density that depends on coordinates polynomially.
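The "solid angle subtended by a face" factor in the strain expressions can be computed with the Van Oosterom-Strackee formula for a triangle, fanning each polygonal face into triangles and summing. A minimal sketch with a cube-face sanity check:

```python
import numpy as np

def triangle_solid_angle(r1, r2, r3):
    """Solid angle subtended at the origin by the triangle with
    vertices r1, r2, r3, via the Van Oosterom-Strackee formula."""
    n1, n2, n3 = (np.linalg.norm(v) for v in (r1, r2, r3))
    num = np.dot(r1, np.cross(r2, r3))
    den = (n1 * n2 * n3 + np.dot(r1, r2) * n3
           + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(num, den)

# Sanity check: one face of the cube [-1, 1]^3, split into two
# triangles, must subtend 1/6 of the full sphere from the center.
a, b, c, d = (np.array(p, float) for p in
              [(1, 1, 1), (-1, 1, 1), (-1, -1, 1), (1, -1, 1)])
face = abs(triangle_solid_angle(a, b, c)) + abs(triangle_solid_angle(a, c, d))
print(face, 2.0 * np.pi / 3.0)
```

Evaluated at an arbitrary observation point (after shifting the vertices by −r_obs), this gives the smooth geometric factor that multiplies the polynomial part of each face contribution.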
Sex ratios of fledgling and recaptured subadult spotted owls in the southern Sierra Nevada
George N. Steger
1995-01-01
Estimates of instantaneous growth rates (λ) of spotted owl (Strix occidentalis) populations have been based on demographic data that uniformly assumed an equal sex ratio among fledglings. In this study, sex ratios of subadults, banded as juveniles, and fledgling California spotted owls (S. o. occidentalis) were observed and compared to an assumed 1:1 ratio. The...
Rapid learning of visual ensembles.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-02-01
We recently demonstrated that observers are capable of encoding not only summary statistics, such as mean and variance of stimulus ensembles, but also the shape of the ensembles. Here, for the first time, we show the learning dynamics of this process, investigate the possible priors for the distribution shape, and demonstrate that observers are able to learn more complex distributions, such as bimodal ones. We used speeding and slowing of response times between trials (intertrial priming) in visual search for an oddly oriented line to assess internal models of distractor distributions. Experiment 1 demonstrates that two repetitions are sufficient for enabling learning of the shape of uniform distractor distributions. In Experiment 2, we compared Gaussian and uniform distractor distributions, finding that following only two repetitions Gaussian distributions are represented differently than uniform ones. Experiment 3 further showed that when distractor distributions are bimodal (with a 30° distance between two uniform intervals), observers initially treat them as uniform, and only with further repetitions do they begin to treat the distributions as bimodal. In sum, observers do not have strong initial priors for distribution shapes and quickly learn simple ones but have the ability to adjust their representations to more complex feature distributions as information accumulates with further repetitions of the same distractor distribution.
Uniform irradiation of irregularly shaped cavities for photodynamic therapy.
Rem, A I; van Gemert, M J; van der Meulen, F W; Gijsbers, G H; Beek, J F
1997-03-01
It is difficult to achieve a uniform light distribution in irregularly shaped cavities. We have conducted a study on the use of hollow 'integrating' moulds for more uniform light delivery of photodynamic therapy in irregularly shaped cavities such as the oral cavity. Simple geometries such as a cubical box, a sphere, a cylinder and a 'bottle-neck' geometry have been investigated experimentally and the results have been compared with computed light distributions obtained using the 'radiosity method'. A high reflection coefficient of the mould and the best uniform direct irradiance possible on the inside of the mould were found to be important determinants for achieving a uniform light distribution.
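The "integrating" behaviour of a highly reflective spherical mould follows from the sphere's view factors, F_ij = A_j/A_total: a radiosity iteration then delivers the same irradiance to every patch no matter where the light is injected. The discretization below is illustrative, not the authors' radiosity code:

```python
import numpy as np

def sphere_radiosity(emitted, areas, reflectance, n_iter=400):
    """Radiosity iteration B = E + rho * (F B) in a closed sphere,
    whose view factor from any patch to patch j is A_j / A_total.
    Because every row of F is identical, each patch receives the same
    irradiance -- the integrating-cavity property exploited by the
    hollow moulds. Illustrative discretization only."""
    f = areas / areas.sum()
    b = emitted.astype(float).copy()
    incident = 0.0
    for _ in range(n_iter):
        incident = float(f @ b)       # identical for every patch
        b = emitted + reflectance * incident
    return b, incident

areas = np.full(100, 1.0)             # 100 equal-area wall patches
emitted = np.zeros(100)
emitted[0] = 1.0                      # light injected at a single patch
b, incident = sphere_radiosity(emitted, areas, reflectance=0.95)
print(incident)  # mean injected flux amplified by 1 / (1 - rho)
```

The fixed point is the classic integrating-sphere multiplier, incident = mean emission / (1 − ρ), which is why a high wall reflectance was found to be the key determinant of uniform light delivery.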
Crater Topography on Titan: Implications for Landscape Evolution
NASA Technical Reports Server (NTRS)
Neish, Catherine D.; Kirk, R.L.; Lorenz, R. D.; Bray, V. J.; Schenk, P.; Stiles, B. W.; Turtle, E.; Mitchell, K.; Hayes, A.
2013-01-01
We present a comprehensive review of available crater topography measurements for Saturn's moon Titan. In general, the depths of Titan's craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundreds of meters shallower than Ganymede's average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 +/- 0.0003 (for the largest crater studied, Menrva, D approximately 425 km) and 0.017 +/- 0.004 (for the smallest crater studied, Ksa, D approximately 39 km). When we evaluate the Anderson-Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan's craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close 'airless' analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan's surface, the only body in the outer Solar System with extensive surface-atmosphere exchange.
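The comparison against a uniform relative-depth distribution can be sketched with a simple Kolmogorov-Smirnov distance and a Monte Carlo p-value; the relative depths below are made-up placeholders, not the paper's measurements:

```python
import numpy as np

def ks_uniform(x):
    """Kolmogorov-Smirnov distance between a sample on [0, 1] and the
    Uniform(0, 1) distribution."""
    x = np.sort(np.asarray(x))
    n = len(x)
    return max(np.max(np.arange(1, n + 1) / n - x),
               np.max(x - np.arange(0, n) / n))

# hypothetical relative depths (0 = fresh, 1 = fully infilled); the
# paper's actual values come from its crater topography measurements
rel_depth = [0.1, 0.25, 0.3, 0.45, 0.5, 0.6, 0.7, 0.8, 0.95]
d_stat = ks_uniform(rel_depth)

# Monte Carlo p-value under the uniform-infilling hypothesis
rng = np.random.default_rng(3)
p = np.mean([ks_uniform(rng.random(len(rel_depth))) >= d_stat
             for _ in range(2000)])
print(d_stat, p)  # large p: uniform relative depths are not rejected
```

A relative-depth sample spread evenly between fresh and fully infilled is consistent with an infilling process operating at a roughly constant rate, the paper's argument for aeolian deposition.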
NASA Astrophysics Data System (ADS)
Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing
2017-11-01
In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
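The sparsity-promoting hybrid L1/L2 regularization can be illustrated with the scalar elastic-net proximal operator; the actual inversion couples many model cells through geodetic Green's functions and a BEM pressure condition, so this is only a toy sketch with invented values:

```python
def prox_elastic_net(y, lam, mu):
    """Closed-form minimizer of 0.5*(x - y)**2 + lam*|x| + 0.5*mu*x**2:
    soft-thresholding (the L1 term zeroes small values, promoting sparsity)
    followed by shrinkage (the L2 term stabilizes the surviving values)."""
    mag = abs(y) - lam
    if mag <= 0.0:
        return 0.0
    return (1.0 if y > 0.0 else -1.0) * mag / (1.0 + mu)

# Hypothetical unregularized "volume change" estimates for five model cells:
dv = [0.05, -0.02, 1.4, 0.01, -2.3]
sparse = [prox_elastic_net(v, lam=0.1, mu=0.5) for v in dv]
# small background artifacts are zeroed; well-localized sources survive, shrunk
```

This is why the hybrid penalty suppresses artificial volume change outside the magmatic source region while retaining compact, outlier-like melt accumulations.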
Rubin, H.; Buddemeier, R.W.
2002-01-01
Part I of this study (Rubin, H.; Buddemeier, R.W. Groundwater Contamination Downstream of a Contaminant Penetration Site Part 1: Extension-Expansion of the Contaminant Plume. J. of Environmental Science and Health Part A (in press)) addressed cases in which a comparatively thin contaminated region, represented by boundary layers (BLs), developed within the freshwater aquifer close to the contaminant penetration site. However, at some distance downstream from the penetration site, the top of the contaminant plume reaches the top or bottom of the aquifer. This is the location of the "attachment point," which comprises the entrance cross section of the domain evaluated in the present part of the study. It is shown that downstream from the entrance cross section, a set of two BLs, termed the inner and outer BLs, develops in the aquifer. It is assumed that the evaluated domain, in which the contaminant distribution gradually becomes uniform, can be divided into two sections: (a) the restructuring section and (b) the establishment section. In the restructuring section, the vertical concentration gradient leads to expansion of the inner BL at the expense of the outer BL, and there is almost no transfer of contaminant mass between the two layers. In the establishment section, each of the BLs occupies half of the aquifer thickness, and the vertical concentration gradient leads to transfer of contaminant mass from the inner to the outer BL. Using BL approximations, changes in the salinity distribution in the aquifer are calculated and evaluated. The establishment section ends at the uniformity point, downstream of which the contaminant concentration profile is practically uniform. The length of the restructuring section, as well as that of the establishment section, is approximately proportional to the square of the aquifer thickness and inversely proportional to the transverse dispersivity.
The study provides a convenient set of definitions and terminology that are helpful in visualizing the gradual development of uniform contaminant concentration distribution in an aquifer subject to contaminant plume penetration. The method developed in this study can be applied to a variety of problems associated with groundwater quality, such as initial evaluation of field data, design of field data collection, the identification of appropriate boundary conditions for numerical models, selection of appropriate numerical modeling approaches, interpretation and evaluation of field monitoring results, etc.
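The reported scaling of the section lengths can be captured in a small helper; since the abstract states only proportionality, the dimensionless prefactor c below is a placeholder, not a value from the study:

```python
def section_length(thickness_m, alpha_t_m, c=1.0):
    """Approximate length of the restructuring (or establishment) section:
    proportional to the aquifer thickness squared and inversely proportional
    to the transverse dispersivity. The prefactor c is an unspecified
    placeholder constant."""
    return c * thickness_m ** 2 / alpha_t_m

base = section_length(10.0, 0.01)   # 10 m thick aquifer, 1 cm dispersivity
```

Doubling the aquifer thickness quadruples the section length, while doubling the transverse dispersivity halves it.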
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.
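A minimal integrate-and-fire simulation in the spirit of the study (simplified here: constant drift, unit discrete time steps, parameters invented for illustration) shows why variance-matched noise distributions yield nearly indistinguishable inter-impulse interval statistics — the accumulated noise at threshold crossing is approximately Gaussian regardless of the per-step distribution:

```python
import random
import statistics

def spike_intervals(noise, n_spikes=2000, threshold=10.0, drift=0.2):
    """Discrete-time integrate-and-fire: accumulate drift plus noise each step,
    fire and reset when the sum crosses threshold; return the intervals
    (in steps) between successive impulses."""
    intervals, v, t, last = [], 0.0, 0, 0
    while len(intervals) < n_spikes:
        t += 1
        v += drift + noise()
        if v >= threshold:
            intervals.append(t - last)
            last, v = t, 0.0
    return intervals

random.seed(1)
gauss_iv = spike_intervals(lambda: random.gauss(0.0, 0.5))        # normokurtic
unif_iv = spike_intervals(lambda: random.uniform(-0.866, 0.866))  # platykurtic, same sd

def cv(iv):
    """Coefficient of variation of the inter-impulse intervals."""
    return statistics.stdev(iv) / statistics.mean(iv)
```

Despite the different per-step noise shapes, the mean interval and CV of the two trains come out nearly identical.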
Testimonial Privileged Communication and the School Counselor
ERIC Educational Resources Information Center
Litwack, Lawrence; and others
1969-01-01
Briefly reviews literature and state laws regarding counselor legal status and client confidentiality. Expresses need for professional associations to assume leadership role in push for uniform legislation. (CJ)
On magnetohydrodynamic flow of second grade nanofluid over a nonlinear stretching sheet
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Aziz, Arsalan; Muhammad, Taseer; Ahmad, Bashir
2016-06-01
This research article addresses the magnetohydrodynamic (MHD) flow of second grade nanofluid over a nonlinear stretching sheet. Heat and mass transfer aspects are investigated through the thermophoresis and Brownian motion effects. Second grade fluid is assumed electrically conducting through a non-uniform applied magnetic field. Mathematical formulation is developed subject to small magnetic Reynolds number and boundary layer assumptions. Newly constructed condition having zero mass flux of nanoparticles at the boundary is incorporated. Transformations have been invoked for the reduction of partial differential systems into the set of nonlinear ordinary differential systems. The governing nonlinear systems have been solved for local behavior. Graphical results of different influential parameters are studied and discussed in detail. Computations for skin friction coefficient and local Nusselt number have been carried out. It is observed that the effects of thermophoresis parameter on the temperature and nanoparticles concentration distributions are qualitatively similar. The temperature and nanoparticles concentration distributions are enhanced for the larger magnetic parameter.
Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, Lily Lee
1973-01-01
Statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photo-counting statistic is derived for large time-bandwidth product. The detection of a specified optical image in the presence of the background light by using the hypothesis test is discussed. The ideal detector based on the likelihood ratio from a set of numbers of photoelectrons ejected from many small areas of the photosensitive surface is studied and compared with the threshold detector and a simple detector which is based on the likelihood ratio by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities for the detectors are carried out by a digital computer.
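The ideal-detector idea — a likelihood ratio built from photoelectron counts over many small areas — can be sketched with Poisson statistics. The 1-D pixel grid, Gaussian image shape, and background level below are illustrative placeholders, not values from the thesis:

```python
import math
import random

def poisson(mu):
    """Stdlib-only Poisson sampler (Knuth's method; adequate for small mu)."""
    l, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def llr(counts, signal, background):
    """Poisson log-likelihood ratio over the small areas of the photosurface:
    Gaussian-image-plus-background versus uniform background alone."""
    return sum(n * math.log(1.0 + s / background) - s
               for n, s in zip(counts, signal))

random.seed(2)
signal = [3.0 * math.exp(-x * x / 8.0) for x in range(-10, 11)]  # Gaussian image
background = 5.0  # uniform background rate per area (illustrative)

def mean_llr(with_signal, trials=200):
    """Average LLR statistic over repeated count realizations."""
    tot = 0.0
    for _ in range(trials):
        counts = [poisson(background + (s if with_signal else 0.0))
                  for s in signal]
        tot += llr(counts, signal, background)
    return tot / trials
```

On average the statistic is larger when the image is present, which is the separation the ideal detector thresholds on; a total-count detector discards the spatial weighting ln(1 + s/background) and separates the hypotheses less well.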
Optics of Water Microdroplets with Soot Inclusions: Exact Versus Approximate Results
NASA Technical Reports Server (NTRS)
Liu, Li; Mishchenko, Michael I.
2016-01-01
We use the recently generalized version of the multi-sphere superposition T-matrix method (STMM) to compute the scattering and absorption properties of microscopic water droplets contaminated by black carbon. The soot material is assumed to be randomly distributed throughout the droplet interior in the form of numerous small spherical inclusions. Our numerically-exact STMM results are compared with approximate ones obtained using the Maxwell-Garnett effective-medium approximation (MGA) and the Monte Carlo ray-tracing approximation (MCRTA). We show that the popular MGA can be used to calculate the droplet optical cross sections, single-scattering albedo, and asymmetry parameter provided that the soot inclusions are quasi-uniformly distributed throughout the droplet interior, but can fail in computations of the elements of the scattering matrix depending on the volume fraction of soot inclusions. The integral radiative characteristics computed with the MCRTA can deviate more significantly from their exact STMM counterparts, while accurate MCRTA computations of the phase function require droplet size parameters substantially exceeding 60.
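The Maxwell-Garnett effective-medium approximation referenced above has a compact closed form for spherical inclusions; the permittivity values below are representative placeholders, not the paper's inputs:

```python
def maxwell_garnett(eps_incl, eps_host, f):
    """Maxwell-Garnett effective permittivity for a volume fraction f of small
    spherical inclusions (eps_incl) quasi-uniformly dispersed in a host
    medium (eps_host)."""
    diff = eps_incl - eps_host
    return eps_host * (eps_incl + 2 * eps_host + 2 * f * diff) \
                    / (eps_incl + 2 * eps_host - f * diff)

# Illustrative permittivities (placeholders, not the paper's optical constants):
eps_water = complex(1.77, 1e-8)   # ~ (n = 1.33)^2, nearly non-absorbing
eps_soot = complex(3.1, 1.0)      # strongly absorbing inclusion material
eps_eff = maxwell_garnett(eps_soot, eps_water, 0.01)  # 1% soot volume fraction
```

Even a 1% soot fraction barely changes the real part of the droplet permittivity but raises the imaginary (absorbing) part by several orders of magnitude, which is why inclusion contamination matters for the droplet absorption cross section.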
A steady-state model of the lunar ejecta cloud
NASA Astrophysics Data System (ADS)
Christou, Apostolos
2014-05-01
Every airless body in the solar system is surrounded by a cloud of ejecta produced by the impact of interplanetary meteoroids on its surface [1]. Such "dust exospheres" have been observed around the Galilean satellites of Jupiter [2,3]. The prospect of long-term robotic and human operations on the Moon by the US and other countries has rekindled interest in the subject [4]. This interest has culminated with the currently ongoing investigation of the Moon's dust exosphere by the LADEE spacecraft [5]. Here a model is presented of a ballistic, collisionless, steady state population of ejecta launched vertically at randomly distributed times and velocities and moving under constant gravity. Assuming a uniform distribution of launch times, I derive closed form solutions for the probability density functions (pdfs) of the height distribution of particles and the distribution of their speeds in a rest frame both at the surface and at altitude. The treatment is then extended to particle motion with respect to a moving platform such as an orbiting spacecraft. These expressions are compared with numerical simulations under lunar surface gravity where the underlying ejection speed distribution is (a) uniform (b) a power law. I discuss the predictions of the model, its limitations, and how it can be validated against near-surface and orbital measurements. [1] Gault, D., Shoemaker, E.M., Moore, H.J., 1963, NASA TN-D 1767. [2] Kruger, H., Krivov, A.V., Hamilton, D. P., Grun, E., 1999, Nature, 399, 558. [3] Kruger, H., Krivov, A.V., Sremcevic, M., Grun, E., 2003, Icarus, 164, 170. [4] Grun, E., Horanyi, M., Sternovsky, Z., 2011, Planetary and Space Science, 59, 1672. [5] Elphic, R.C., Hine, B., Delory, G.T., Salute, J.S., Noble, S., Colaprete, A., Horanyi, M., Mahaffy, P., and the LADEE Science Team, 2014, LPSC XLV, LPI Contr. 1777, 2677.
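The steady-state height distribution for a single ejection speed is easy to check by Monte Carlo: a uniform distribution of launch times is equivalent to sampling a particle's age uniformly over its ballistic flight, which implies P(h > H) = sqrt(1 - H/h_max). A stdlib-only sketch (the ejection speed is illustrative):

```python
import random

def sample_height(v, g):
    """Height of one vertically launched ejecta particle under constant
    gravity, observed at an age drawn uniformly over its ballistic flight;
    with uniformly distributed launch times this samples the steady-state
    height distribution."""
    t = random.uniform(0.0, 2.0 * v / g)   # particle age, uniform over flight
    return v * t - 0.5 * g * t * t

random.seed(3)
g_moon = 1.62                  # lunar surface gravity, m/s^2
v = 10.0                       # illustrative ejection speed, m/s
h_max = v * v / (2.0 * g_moon)
heights = [sample_height(v, g_moon) for _ in range(100_000)]
frac_high = sum(h > 0.5 * h_max for h in heights) / len(heights)
```

The closed-form pdf implies P(h > h_max/2) = sqrt(1/2) ≈ 0.707 — particles spend most of their flight near apex — and the Monte Carlo estimate reproduces this.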
Effects of Droplet Size on Intrusion of Sub-Surface Oil Spills
NASA Astrophysics Data System (ADS)
Adams, Eric; Chan, Godine; Wang, Dayang
2014-11-01
We explore effects of droplet size on droplet intrusion and transport in sub-surface oil spills. Negatively buoyant glass beads released continuously to a stratified ambient simulate oil droplets in a rising multiphase plume, and distributions of settled beads are used to infer signatures of surfacing oil. Initial tests used quiescent conditions, while ongoing tests simulate currents by towing the source and a bottom sled. Without current, deposited beads have a Gaussian distribution, with variance increasing with decreasing particle size. Distributions agree with a model assuming first order particle loss from an intrusion layer of constant thickness, and empirically determined flow rate. With current, deposited beads display a parabolic distribution similar to that expected from a source in uniform flow; we are currently comparing observed distributions with similar analytical models. Because chemical dispersants have been used to reduce oil droplet size, our study provides one measure of their effectiveness. Results are applied to conditions from the 'Deep Spill' field experiment, and the recent Deepwater Horizon oil spill, and are being used to provide "inner boundary conditions" for subsequent far field modeling of these events. This research was made possible by grants from Chevron Energy Technology Co., through the Chevron-MITEI University Partnership Program, and BP/The Gulf of Mexico Research Initiative, GISR.
Does the central limit theorem always apply to phase noise? Some implications for radar problems
NASA Astrophysics Data System (ADS)
Gray, John E.; Addison, Stephen R.
2017-05-01
The phase noise problem, or Rayleigh problem, occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence can be treated as a normal distribution. By formalizing the characterization of phase noise for an individual random variable (see Gray and Alouani), the characteristic function (CF) of a sum of identically distributed random variables becomes the product of the individual CFs. This product CF for phase noise can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof as discussed in Papoulis to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
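The breakdown can be illustrated numerically: the in-phase component of a single uniform-phase term has an arcsine (platykurtic) distribution, and the excess kurtosis of a superposition of N such terms decays only as -1.5/N, so a receiver combining few components remains measurably non-Gaussian. A stdlib sketch (component counts and sample sizes are illustrative):

```python
import math
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis; 0 for a Gaussian."""
    m = statistics.fmean(xs)
    v = statistics.fmean([(x - m) ** 2 for x in xs])
    k = statistics.fmean([(x - m) ** 4 for x in xs])
    return k / (v * v) - 3.0

def phasor_sum(n_components):
    """In-phase part of a superposition of unit phasors with uniform phase noise."""
    return sum(math.cos(random.uniform(0.0, 2.0 * math.pi))
               for _ in range(n_components))

random.seed(4)
few = [phasor_sum(2) for _ in range(50_000)]    # small receiver: non-Gaussian
many = [phasor_sum(50) for _ in range(50_000)]  # CLT regime: nearly Gaussian
```

With N = 2 the sum is clearly platykurtic (excess kurtosis near -0.75), while with N = 50 it is close to Gaussian, matching the -1.5/N decay.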
Spherization of the remnants of asymmetrical SN explosions in a uniform medium
NASA Astrophysics Data System (ADS)
Bisnovatyi-Kogan, G. S.; Blinnikov, S. I.
A 'snow-plow' approximation is used to predict the evolution toward a spherical shape of a supernova remnant (SNR) after a shock wave has traveled through a uniform medium following an asymmetrical SN explosion. The asymmetry arises because the explosion is driven by magnetorotation. It is assumed that the main part of the mass remains in a thin layer after the explosion and that the layer can be described by 1.5-dimensional hydrodynamics. The cavity pressure inside the shock is assumed to be much greater than the pressure of the outside medium. The snow-plow model accounts for asymmetrical particle velocities in the expanding layer and the tangential velocity averaged across the shock. The equations are configured to conserve mass and momentum and have specific initial conditions. The calculations are in agreement with observations of Cas A.
Evaluation of Aerodynamic Drag and Torque for External Tanks in Low Earth Orbit
Stone, William C.; Witzgall, Christoph
2006-01-01
A numerical procedure is described in which the aerodynamic drag and torque in low Earth orbit are calculated for a prototype Space Shuttle external tank and its components, the “LO2” and “LH2” tanks, carrying liquid oxygen and hydrogen, respectively, for any given angle of attack. Calculations assume the hypersonic limit of free molecular flow theory. Each shell of revolution is assumed to be described by a series of parametric equations for its contour. It is discretized into circular cross sections perpendicular to the axis of revolution, which yield a series of ellipses when projected according to the given angle of attack. The drag profile, that is, the projection of the entire shell, is approximated by the convex envelope of those ellipses. The area of the drag profile, that is, the drag area, and its center of area moment, that is, the drag center, are then calculated and permit determination of the drag vector and the eccentricity vector from the center of gravity of the shell to the drag center. The aerodynamic torque is obtained as the cross product of those vectors. The tanks are assumed to be either evacuated or pressurized with a uniform internal gas distribution: dynamic shifting of the tank center of mass due to residual propellant sloshing is not considered.
Rates of short-GRB afterglows in association with binary neutron star mergers
NASA Astrophysics Data System (ADS)
Saleem, M.; Pai, Archana; Misra, Kuntal; Resmi, L.; Arun, K. G.
2018-03-01
Assuming all binary neutron star (BNS) mergers produce short gamma-ray bursts, we combine the merger rates of BNS from population synthesis studies, the sensitivities of advanced gravitational wave (GW) interferometer networks, and of the electromagnetic (EM) facilities in various wavebands, to compute the detection rate of associated afterglows in these bands. Using the inclination angle measured from GWs as a proxy for the viewing angle and assuming a uniform distribution of jet opening angle between 3° and 30°, we generate light curves of the counterparts using the open access afterglow hydrodynamics package BOXFIT for X-ray, optical, and radio bands. For different EM detectors, we obtain the fraction of EM counterparts detectable in these three bands by imposing appropriate detection thresholds. In association with BNS mergers detected by five (three) detector networks of advanced GW interferometers, assuming a BNS merger rate of 0.6-774 Gpc-3 yr-1 from population synthesis models, we find the afterglow detection rates (per year) to be 0.04-53 (0.02-27), 0.03-36 (0.01-19), and 0.04-47 (0.02-25) in the X-ray, optical, and radio bands, respectively. Our rates represent maximum possible detections for the given BNS rate since we ignore effects of cadence and field of view in EM follow-up observations.
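The geometric part of this calculation — how often an isotropically oriented merger is viewed inside a jet opening angle drawn uniformly from 3° to 30° — is easy to reproduce. Note this toy on-axis criterion ignores off-axis afterglow emission and the flux thresholds that the paper's BOXFIT light curves account for:

```python
import math
import random

def on_axis_fraction(trials=200_000):
    """Monte Carlo fraction of isotropically oriented bipolar jets whose
    viewing angle lies inside an opening angle drawn uniformly from
    3 to 30 degrees."""
    hits = 0
    for _ in range(trials):
        theta_j = math.radians(random.uniform(3.0, 30.0))
        theta_v = math.acos(random.uniform(-1.0, 1.0))  # isotropic orientation
        if min(theta_v, math.pi - theta_v) <= theta_j:  # either jet pole counts
            hits += 1
    return hits / trials

random.seed(5)
frac = on_axis_fraction()   # analytic value: 1 - E[cos(theta_j)], about 0.05
```

Only a few percent of mergers are viewed on-axis under this assumption, which is why detectable off-axis emission is essential to the quoted afterglow rates.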
A new approach for the description of discharge extremes in small catchments
NASA Astrophysics Data System (ADS)
Pavia Santolamazza, Daniela; Lebrenz, Henning; Bárdossy, András
2017-04-01
Small catchment basins in Northwestern Switzerland, characterized by small concentration times, are frequently affected by floods. The peak and the volume of these floods are commonly estimated by a frequency analysis of occurrence and described by a random variable, assuming a uniformly distributed probability and stationary input drivers (e.g. precipitation, temperature). For these small catchments, we attempt to describe and identify the underlying mechanisms and dynamics at the occurrence of extremes by means of available high-temporal-resolution (10 min) observations and to explore the possibilities of regionalizing hydrological parameters for short intervals. Therefore, we investigate new concepts for flood description, such as entropy as a measure of the disorder and dispersion of precipitation. First findings and conclusions of this ongoing research are presented.
The mechanics of slithering locomotion.
Hu, David L; Nirody, Jasmine; Scott, Terri; Shelley, Michael J
2009-06-23
In this experimental and theoretical study, we investigate the slithering of snakes on flat surfaces. Previous studies of slithering have rested on the assumption that snakes slither by pushing laterally against rocks and branches. In this study, we develop a theoretical model for slithering locomotion by observing snake motion kinematics and experimentally measuring the friction coefficients of snakeskin. Our predictions of body speed show good agreement with observations, demonstrating that snake propulsion on flat ground, and possibly in general, relies critically on the frictional anisotropy of their scales. We have also highlighted the importance of weight distribution in lateral undulation, previously difficult to visualize and hence assumed uniform. The ability to redistribute weight, clearly of importance when appendages are airborne in limbed locomotion, has a much broader generality, as shown by its role in improving limbless locomotion.
NASA Technical Reports Server (NTRS)
Clark, D. M.; Hall, D. F.
1980-01-01
The significance of the fraction of the mass outgassed by a negatively charged space vehicle which is ionized within the vehicle plasma sheath and electrostatically reattracted to the space vehicle was determined. The ML-12 retarding potential analyzer/temperature-controlled quartz crystal microbalances (RPA/TQCMs) distinguish between charged and neutral molecules and investigate contamination mass transport mechanisms. Two long-term, quick-look flight data sets indicate that on average a significant fraction of the mass arriving at one RPA/TQCM is ionized. It is assumed that vehicle frame charging during these periods was approximately uniformly distributed in degree and frequency. It is shown that electrostatic reattraction of ionized molecules is an important contamination mechanism at and near geosynchronous altitudes.
On a thermal analysis of a second stripper for rare isotope accelerator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Momozaki, Y.; Nolen, J.; Nuclear Engineering Division
2008-08-04
This memo summarizes simple calculations and results of the thermal analysis on the second stripper to be used in the driver linac of Rare Isotope Accelerator (RIA). Both liquid (Sodium) and solid (Titanium and Vanadium) stripper concepts were considered. These calculations were intended to provide basic information to evaluate the feasibility of liquid (thick film) and solid (rotating wheel) second strippers. Nuclear physics calculations to estimate the volumetric heat generation in the stripper material were performed by 'LISE for Excel'. In the thermal calculations, the strippers were modeled as a thin 2D plate with uniform heat generation within the beam spot. Then, temperature distributions were computed by assuming that the heat spreads conductively in the plate in the radial direction without radiative heat losses to the surroundings.
Spatial and temporal distribution of trunk-injected imidacloprid in apple tree canopies.
Aćimović, Srđan G; VanWoerkom, Anthony H; Reeb, Pablo D; Vandervoort, Christine; Garavaglia, Thomas; Cregg, Bert M; Wise, John C
2014-11-01
Pesticide use in orchards creates drift-driven pesticide losses which contaminate the environment. Trunk injection of pesticides as a target-precise delivery system could greatly reduce pesticide losses. However, pesticide efficiency after trunk injection is associated with the underinvestigated spatial and temporal distribution of the pesticide within the tree crown. This study quantified the spatial and temporal distribution of trunk-injected imidacloprid within apple crowns after trunk injection using one, two, four or eight injection ports per tree. The spatial uniformity of imidacloprid distribution in apple crowns significantly increased with more injection ports. Four ports allowed uniform spatial distribution of imidacloprid in the crown. Uniform and non-uniform spatial distributions were established early and lasted throughout the experiment. The temporal distribution of imidacloprid was significantly non-uniform. Upper and lower crown positions did not significantly differ in compound concentration. Crown concentration patterns indicated that imidacloprid transport in the trunk occurred through radial diffusion and vertical uptake with a spiral pattern. By showing where and when a trunk-injected compound is distributed in the apple tree canopy, this study addresses a key knowledge gap in terms of explaining the efficiency of the compound in the crown. These findings allow the improvement of target-precise pesticide delivery for more sustainable tree-based agriculture. © 2014 Society of Chemical Industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zubkov, Tykhon; Smith, R. Scott; Engstrom, Todd R.; Kay, Bruce D.
2007-11-14
The adsorption, desorption, and diffusion kinetics of N2 on thick (up to ~9 μm) porous films of amorphous solid water (ASW) were studied using molecular beam techniques and temperature programmed desorption (TPD). Porous ASW films were grown on Pt(111) at low temperature (<30 K) from a collimated H2O beam at glancing incident angles. In thin films (<1 μm), the desorption kinetics are well described by a model that assumes rapid and uniform N2 distribution throughout the film. In thicker films (>1 μm), N2 adsorption at 27 K results in a non-uniform distribution where most of the N2 is trapped in the outer region of the film. Redistribution of N2 can be induced by thermal annealing. The apparent activation energy for this process is ~7 kJ/mol, which is approximately half of the desorption activation energy at the corresponding coverage. Blocking adsorption sites near the film surface facilitates transport into the film. Despite the onset of limited diffusion, the adsorption kinetics are efficient, precursor-mediated, and independent of film thickness. An adsorption mechanism is proposed, in which a high-coverage N2 front propagates into a pore by the rapid transport of physisorbed 2nd-layer N2 species on top of the 1st chemisorbed layer.
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.
Han, Qiyang; Wellner, Jon A
2016-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.
NASA Astrophysics Data System (ADS)
Lehujeur, Maximilien; Vergne, Jérôme; Maggi, Alessia; Schmittbuhl, Jean
2017-01-01
We developed and applied a method for ambient noise surface wave tomography that can deal with noise cross-correlation functions governed to first order by a non-uniform distribution of the ambient seismic noise sources. The method inverts the azimuthal distribution of noise sources that are assumed to be far from the network, together with the spatial variations of the phase and group velocities on an optimized irregular grid. Direct modelling of the two-sided noise correlation functions avoids dispersion curve picking on every station pair and minimizes analyst intervention. The method involves station pairs spaced by distances down to a fraction of a wavelength, thereby bringing additional information for tomography. After validating the method on synthetic data, we applied it to a set of long-term continuous waveforms acquired around the geothermal sites at Soultz-sous-Forêts and Rittershoffen (Northern Alsace, France). For networks with limited aperture, we show that taking the azimuthal variations of the noise energy into account has significant impact on the surface wave dispersion maps. We obtained regional phase and group velocity models in the 1-7 s period range, which is sensitive to the structures encompassing the geothermal reservoirs. The ambient noise in our dataset originates from two main directions, the northern Atlantic Ocean and the Mediterranean Sea, and is dominated by the first Rayleigh wave overtone in the 2-5 s period range.
Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui
2017-01-01
A sophisticated method for node deployment can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Previous studies have proposed many node-deployment-based lifetime optimization methods for WSNs; however, the retransmission mechanism and the discrete power control strategy, which are widely used in practice and have a large effect on network energy consumption, are often neglected or assumed continuous, respectively. In this paper, both retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in the realistic situation. To illustrate the effectiveness of our method, both one-tiered and two-tiered, uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and the comparisons between our optimal results and those based on relatively inaccurate lifetime evaluation show the advantage of our method when investigating WSN lifetime optimization problems.
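The effect of a retransmission mechanism on the energy model can be sketched with a truncated-geometric attempt count; the energy unit and retry limit below are placeholders, not parameters from the paper:

```python
def expected_attempts(p_success, max_retries):
    """Mean number of transmissions with per-attempt success probability
    p_success and at most max_retries retransmissions (truncated geometric
    number of attempts)."""
    k = max_retries + 1                       # total allowed attempts
    return (1.0 - (1.0 - p_success) ** k) / p_success

def energy_per_delivered(e_tx, p_success, max_retries):
    """Mean radio energy spent per successfully delivered packet: attempts
    are paid for whether or not delivery eventually succeeds."""
    k = max_retries + 1
    p_deliver = 1.0 - (1.0 - p_success) ** k
    return e_tx * expected_attempts(p_success, max_retries) / p_deliver
```

With a reliable link the cost stays near one transmission per packet, while degrading the per-attempt success probability from 0.9 to 0.5 roughly doubles the energy per delivered packet — exactly the effect a lifetime model that ignores retransmission misses.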
NASA Astrophysics Data System (ADS)
Kunieda, Yuichi; Fukuda, Daiji; Ohno, Masashi; Takahashi, Hiroyuki; Nakazawa, Masaharu; Inou, Tadashi; Ataka, Manabu
2004-05-01
We are developing a high-energy-resolution X-ray microcalorimeter for X-ray fluorescent spectrometry using a superconducting transition edge sensor (TES) that consists of a bilayer of iridium and gold (Ir/Au). In this paper, we have studied the superconducting transition characteristics of two different bilayer structures. Type 1 is a simple stacked bilayer in which a square-pattern film of iridium is covered with an identical pattern of gold. Type 2 is based on the Type 1 Ir/Au film but has additional Au side banks. The resistance-temperature characteristics of these films were investigated by a four-wire resistance measurement method. As a result, the transition curve of Type 2 obeyed the Ginzburg-Landau (GL) theory, whereas the transition curve of Type 1 was entirely different from that of Type 2. The difference between the transition curves of the two devices is discussed in terms of the difference in the electric current distribution inside the TESs. Even if we assume a uniform bilayer film and a uniform proximity effect over the entire film, the current density inside the device affects the characteristics of the transition curves.
A micromechanical approach for homogenization of elastic metamaterials with dynamic microstructure.
Muhlestein, Michael B; Haberman, Michael R
2016-08-01
An approximate homogenization technique is presented for generally anisotropic elastic metamaterials consisting of an elastic host material containing randomly distributed heterogeneities displaying frequency-dependent material properties. The dynamic response may arise from relaxation processes such as viscoelasticity or from dynamic microstructure. A Green's function approach is used to model elastic inhomogeneities embedded within a uniform elastic matrix as force sources that are excited by a time-varying, spatially uniform displacement field. Assuming dynamic subwavelength inhomogeneities only interact through their volume-averaged fields implies the macroscopic stress and momentum density fields are functions of both the microscopic strain and velocity fields, and may be related to the macroscopic strain and velocity fields through localization tensors. The macroscopic and microscopic fields are combined to yield a homogenization scheme that predicts the local effective stiffness, density and coupling tensors for an effective Willis-type constitutive equation. It is shown that when internal degrees of freedom of the inhomogeneities are present, Willis-type coupling becomes necessary on the macroscale. To demonstrate the utility of the homogenization technique, the effective properties of an isotropic elastic matrix material containing isotropic and anisotropic spherical inhomogeneities, isotropic spheroidal inhomogeneities and isotropic dynamic spherical inhomogeneities are presented and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sogin, H.H.; Goldstein, R.J.
1960-02-01
Experiments were performed on mass transfer by forced convection from naphthalene strips on a flat plate to an air stream at ordinary temperature and pressure. Turbulence was induced in the boundary layer by means of a wire strip. In all cases there was a hydrodynamic starting length upstream of the strips. The ratio of this inert length to the total length was varied from about 0.80 to 0.96. The flow was practically incompressible, with Reynolds number, based on the total length, varying from 175,000 to 486,000. The Schmidt number was 2.5. The experimental results fell in proximity to the Seban step-function factor when they were reduced after the mass-momentum analysis of Deissler and Loeffler for a surface of uniform vapor pressure. When Karman's formulation of the mass-momentum analogy was assumed, the data fell between the values predicted by the Seban and by the Rubesin expressions for the step-function factor. The results were well correlated by the Colburn analogy in conjunction with the Rubesin step-function factor.
MEASURING THE ABUNDANCE OF SUB-KILOMETER-SIZED KUIPER BELT OBJECTS USING STELLAR OCCULTATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlichting, Hilke E.; Ofek, Eran O.; Gal-Yam, Avishay
2012-12-20
We present here the analysis of about 19,500 new star hours of low ecliptic latitude observations (|b| ≤ 20°) obtained by the Hubble Space Telescope's Fine Guidance Sensors over a time span of more than nine years, in addition to the ~12,000 star hours previously analyzed by Schlichting et al. Our search for stellar occultations by small Kuiper Belt Objects (KBOs) yielded one new candidate event corresponding to a body with a 530 ± 70 m radius at a distance of about 40 AU. Using bootstrap simulations, we estimate a probability of ≈5% that this event is due to random statistical fluctuations within the new data set. Combining this new event with the single KBO occultation reported by Schlichting et al., we arrive at the following results: (1) the ecliptic latitudes of 6.6° and 14.4° of the two events are consistent with the observed inclination distribution of larger, 100-km-sized KBOs. (2) Assuming that small, sub-kilometer-sized KBOs have the same ecliptic latitude distribution as their larger counterparts, we find an ecliptic surface density of KBOs with radii larger than 250 m of N(r > 250 m) = 1.1 (+1.5/-0.7) × 10^7 deg^-2; if sub-kilometer-sized KBOs instead have a uniform ecliptic latitude distribution for -20° < b < 20°, then N(r > 250 m) = 4.4 (+5.8/-2.8) × 10^6 deg^-2. This is the best measurement of the surface density of sub-kilometer-sized KBOs to date. (3) Assuming the KBO size distribution can be well described by a single power law given by N(>r) ∝ r^(1-q), where N(>r) is the number of KBOs with radii greater than r and q is the power-law index, we find q = 3.8 ± 0.2 and q = 3.6 ± 0.2 for a KBO ecliptic latitude distribution that follows the observed distribution of larger, 100-km-sized KBOs and for a uniform KBO ecliptic latitude distribution for -20° < b < 20°, respectively. (4) Regardless of the exact power law, our results suggest that small KBOs are numerous enough to satisfy the required supply rate for the Jupiter family comets. (5) We can rule out a single power law below the break with q > 4.0 at 2σ, confirming a strong deficit of sub-kilometer-sized KBOs compared to a population extrapolated from objects with r > 45 km. This suggests that small KBOs are undergoing collisional erosion and that the Kuiper Belt is a true analog to the dust-producing debris disks observed around other stars.
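As a quick illustration of result (3), the cumulative power law can be coded directly; the anchor value and the index q are the best-fit numbers quoted above, and the function is only a sketch of how such a size distribution scales:

```python
def n_greater(r_m, q, n_ref=1.1e7, r_ref_m=250.0):
    """Cumulative ecliptic surface density N(>r) for a single power law
    N(>r) proportional to r^(1-q), anchored at the reported value
    N(r > 250 m) ~ 1.1e7 deg^-2 (best-fit numbers; illustration only)."""
    return n_ref * (r_m / r_ref_m) ** (1.0 - q)

# With q = 3.8, bodies ten times larger are 10^2.8 (~630x) rarer:
ratio = n_greater(250.0, 3.8) / n_greater(2500.0, 3.8)
```

The steep index is what drives conclusion (5): extrapolating the shallower large-body law to sub-kilometer radii would predict far more objects than are observed.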
NASA Astrophysics Data System (ADS)
Larquier, S.; Ponomarenko, P.; Ribeiro, A. J.; Ruohoniemi, J. M.; Baker, J. B. H.; Sterne, K. T.; Lester, M.
2013-08-01
The midlatitude Super Dual Auroral Radar Network (SuperDARN) radars regularly observe nighttime low-velocity Sub-Auroral Ionospheric Scatter (SAIS) from decameter-scale ionospheric density irregularities during quiet geomagnetic conditions. To establish the origin of the density irregularities responsible for low-velocity SAIS, it is necessary to distinguish between the effects of high frequency (HF) propagation and irregularity occurrence itself on the observed backscatter distribution. We compare range, azimuth, and elevation data from the Blackstone SuperDARN radar with modeling results from ray tracing coupled with the International Reference Ionosphere, assuming a uniform irregularity distribution. The observed and modeled distributions are shown to be very similar. The spatial distribution of backscattering is consistent with the requirement that HF rays propagate nearly perpendicular to the geomagnetic field lines (aspect angle ≤1°). For the first time, the irregularities responsible for low-velocity SAIS are determined to extend between 200 and 300 km altitude, validating previous assumptions that low-velocity SAIS is an F-region phenomenon. We find that the limited spatial extent of this category of ionospheric backscatter within SuperDARN radars' fields-of-view is a consequence of HF propagation effects and the finite vertical extent of the scattering irregularities. We conclude that the density irregularities responsible for low-velocity SAIS are widely distributed horizontally within the midlatitude ionosphere but are confined to the bottom-side F-region.
Effects of beam irregularity on uniform scanning
NASA Astrophysics Data System (ADS)
Kim, Chang Hyeuk; Jang, Sea duk; Yang, Tae-Keun
2016-09-01
An active scanning beam delivery method has many advantages in particle beam applications. For the beam to be successfully delivered to the target volume using the active scanning technique, the dose uniformity must be considered and should be within 2.5% in the case of therapy applications. During beam irradiation, many beam parameters affect the 2-dimensional uniformity at the target layer. A basic assumption in the beam irradiation planning stage is that the shape of the beam is symmetric and follows a Gaussian distribution. In this study, a pure Gaussian-shaped beam distribution was distorted by adding a parasitic Gaussian distribution. An appropriate uniform scanning condition was deduced by using a quantitative analysis based on the gamma value of the distorted beam and the 2-dimensional uniformity.
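As a rough illustration of the setup just described, the sketch below superposes Gaussian beam spots on a uniform scan grid, adds a hypothetical parasitic Gaussian component (all spot sizes, amplitudes, and the pitch are invented), and compares a simple uniformity figure for the ideal and distorted beams. This is not the gamma-value analysis used in the study, just a toy model of how spot distortion degrades 1-D uniformity.

```python
import numpy as np

def gaussian(x, mu, sigma, amp=1.0):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-50.0, 50.0, 2001)            # lateral position, mm
spots = np.arange(-40.0, 40.0 + 1e-9, 4.0)    # scan positions, 4 mm pitch
sigma = 5.0                                   # primary spot width, mm

# Ideal beam vs. one distorted by a shifted, wider parasitic Gaussian.
dose_ideal = sum(gaussian(x, c, sigma) for c in spots)
dose_dist = sum(gaussian(x, c, sigma) + gaussian(x, c + 2.0, 8.0, amp=0.2)
                for c in spots)

def uniformity(dose, x, half_width=30.0):
    """Percent standard deviation over the central target region."""
    core = dose[np.abs(x) <= half_width]
    return 100.0 * core.std() / core.mean()
```

On this toy setup the distorted beam yields the larger uniformity figure, mirroring the qualitative point of the abstract.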
Intra-reach headwater fish assemblage structure
McKenna, James E.
2017-01-01
Large-scale conservation efforts can take advantage of modern large databases and regional modeling and assessment methods. However, these broad-scale efforts often assume uniform average habitat conditions and/or species assemblages within stream reaches.
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. 
The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal.
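For intuition about the locus-averaged Shannon entropy (LASE) criterion mentioned above, here is a minimal sketch for biallelic SNPs. The allele frequencies are made up, and the actual MOLO objective also folds in non-gap map length and uniformity of the SNP distribution, which this sketch omits.

```python
import math

def locus_entropy(p):
    """Shannon entropy (bits) of one biallelic SNP with allele
    frequency p; maximal (1 bit) at p = 0.5."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def locus_averaged_entropy(freqs):
    """LASE-style score: average per-locus entropy over a candidate panel."""
    return sum(locus_entropy(p) for p in freqs) / len(freqs)

# A panel of informative SNPs (p near 0.5) scores higher than rare variants:
informative = locus_averaged_entropy([0.45, 0.5, 0.48])
rare = locus_averaged_entropy([0.02, 0.05, 0.01])
```

Maximizing such a score pushes the selected panel toward common, informative markers, which is what drives the improved imputation accuracy reported above.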
BUCKO- A BUCKLING ANALYSIS FOR RECTANGULAR PLATES WITH CENTRALLY LOCATED CUTOUTS
NASA Technical Reports Server (NTRS)
Nemeth, M. P.
1994-01-01
BUCKO is a computer program developed to predict the buckling load of a rectangular compression-loaded orthotropic plate with a centrally located cutout. The plate is assumed to be a balanced, symmetric laminate of uniform thickness. The cutout shape can be elliptical, circular, rectangular, or square. The BUCKO package includes sample data that demonstrates the essence of the program and its ease of usage. BUCKO uses an approximate one-dimensional formulation of the classical two-dimensional buckling problem following the Kantorovich method. The boundary conditions are considered to be simply supported unloaded edges and either clamped or simply supported loaded edges. The plate is loaded in uniaxial compression by either uniformly displacing or uniformly stressing two opposite edges of the plate. The BUCKO analysis consists of two parts: calculation of the inplane stress distribution prior to buckling, and calculation of the plate axial load and displacement at buckling. User input includes plate planform and cutout geometry, plate membrane and bending stiffnesses, finite difference parameters, boundary condition data, and loading data. Results generated by BUCKO are the prebuckling strain energy, inplane stress resultants, buckling mode shape, critical end shortening, and average axial and transverse strains at buckling. BUCKO is written in FORTRAN V for batch execution and has been implemented on a CDC CYBER 170 series computer operating under NOS with a central memory requirement of approximately 343K of 60 bit words. This program was developed in 1984 and was last updated in 1990.
The influence of the ionized medium on synchrotron emission in interstellar space.
NASA Technical Reports Server (NTRS)
Ramaty, R.
1972-01-01
The effect of the ionized gas on synchrotron emission in the interstellar medium is investigated. A detailed calculation of the synchrotron emissivity of cosmic electrons, assumed to have an isotropic pitch-angle distribution in a uniform magnetic field, is made as a function of frequency and observation angle with respect to the field. The results are presented both as a local emissivity and as an intensity, the latter obtained by neglecting free-free absorption in the interstellar medium and by assuming that the emissivity is constant along the line of sight. The comparison of these results with previous studies on the nature of the low-frequency turnover of the galactic nonthermal radio background reveals that, except if the component perpendicular to the line of sight of the interstellar magnetic field is small (less than 1 microgauss), or if the cosmic-ray electron spectrum is cut off at energies below a few hundred MeV, the suppression of synchrotron emission by the ambient electrons has in general a lesser effect than free-free absorption by these electrons, and that in some cases this suppression effect is almost entirely negligible.
The locking-decoding frontier for generic dynamics.
Dupuis, Frédéric; Florjanczyk, Jan; Hayden, Patrick; Leung, Debbie
2013-11-08
It is known that the maximum classical mutual information, which can be achieved between measurements on pairs of quantum systems, can drastically underestimate the quantum mutual information between them. In this article, we quantify this distinction between classical and quantum information by demonstrating that after removing a logarithmic-sized quantum system from one half of a pair of perfectly correlated bitstrings, even the most sensitive pair of measurements might yield only outcomes essentially independent of each other. This effect is a form of information locking but the definition we use is strictly stronger than those used previously. Moreover, we find that this property is generic, in the sense that it occurs when removing a random subsystem. As such, the effect might be relevant to statistical mechanics or black hole physics. While previous works had always assumed a uniform message, we assume only a min-entropy bound and also explore the effect of entanglement. We find that classical information is strongly locked almost until it can be completely decoded. Finally, we exhibit a quantum key distribution protocol that is 'secure' in the sense of accessible information but in which leakage of even a logarithmic number of bits compromises the secrecy of all others.
Electrically-induced stresses and deflection in multiple plates
NASA Astrophysics Data System (ADS)
Hu, Jih-Perng; Tichler, P. R.
1992-04-01
Thermohydraulic tests are being planned at the High Flux Beam Reactor of Brookhaven National Laboratory, in which direct electrical heating of metal plates will simulate decay heating in parallel plate-type fuel elements. The required currents are high if plates are made of metal with a low electrical resistance, such as aluminum. These high currents will induce either attractive or repulsive forces between adjacent current-carrying plates. Such forces, if strong enough, will cause the plates to deflect and so change the geometry of the coolant channel between the plates. Since this is undesirable, an analysis was made to evaluate the magnitude of the deflection and related stresses. In contrast to earlier publications in which either a concentrated or a uniform load was assumed, in this paper an exact force distribution on the plate is analytically solved and then used for stress and deflection calculations, assuming each plate to be a simply supported beam. Results indicate that due to superposition of the induced forces between plates in a multiple-and-parallel plate array, the maximum deflection and bending stress occur at the midpoint of the outermost plate. The maximum shear stress, which is inversely proportional to plate thickness, occurs at both ends of the outermost plate.
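For context, the textbook closed form for a simply supported beam under a uniform load, which is the simplified loading assumption the paper contrasts with its exact electromagnetic force distribution, can be sketched as follows (the numeric inputs are hypothetical):

```python
def max_deflection_uniform(w, L, E, I):
    """Midspan deflection of a simply supported beam under a uniform
    distributed load w (force per unit length):
    delta_max = 5 w L^4 / (384 E I)."""
    return 5.0 * w * L**4 / (384.0 * E * I)

# Hypothetical aluminum plate treated as a beam: w = 1 kN/m, L = 2 m,
# E = 70 GPa, I = 1e-8 m^4.
delta = max_deflection_uniform(w=1.0e3, L=2.0, E=70.0e9, I=1.0e-8)
```

With the exact (non-uniform) force distribution the coefficient changes, but the strong L^4 and 1/I dependence, and hence the sensitivity to plate thickness, carries over.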
Planck intermediate results: XXXIV. The magnetic field structure in the Rosette Nebula
Aghanim, N.; Alves, M. I. R.; Arnaud, M.; ...
2016-02-09
Planck has mapped the polarized dust emission over the whole sky, making it possible to trace the Galactic magnetic field structure that pervades the interstellar medium (ISM). In this paper, we combine polarization data from Planck with rotation measure (RM) observations towards a massive star-forming region, the Rosette Nebula in the Monoceros molecular cloud, to study its magnetic field structure and the impact of an expanding H ii region on the morphology of the field. We derive an analytical solution for the magnetic field, assumed to evolve from an initially uniform configuration following the expansion of ionized gas and the formation of a shell of swept-up ISM. From the RM data we estimate a mean value of the line-of-sight component of the magnetic field of about 3 μG (towards the observer) in the Rosette Nebula, for a uniform electron density of about 12 cm^-3. The dust shell that surrounds the Rosette H ii region is clearly observed in the Planck intensity map at 353 GHz, with a polarization signal significantly different from that of the local background when considered as a whole. The Planck observations constrain the plane-of-the-sky orientation of the magnetic field in the Rosette’s parent molecular cloud to be mostly aligned with the large-scale field along the Galactic plane. The Planck data are compared with the analytical model, which predicts the mean polarization properties of a spherical and uniform dust shell for a given orientation of the field. This comparison leads to an upper limit of about 45° on the angle between the line of sight and the magnetic field in the Rosette complex, for an assumed intrinsic dust polarization fraction of 4%. This field direction can reproduce the RM values detected in the ionized region if the magnetic field strength in the Monoceros molecular cloud is in the range 6.5–9 μG. Finally, the present analytical model is able to reproduce the RM distribution across the ionized nebula, as well as the mean dust polarization properties of the swept-up shell, and can be directly applied to other similar objects.
Evaluation of dripper clogging using magnetic water in drip irrigation
NASA Astrophysics Data System (ADS)
Khoshravesh, Mojtaba; Mirzaei, Sayyed Mohammad Javad; Shirazi, Pooya; Valashedi, Reza Norooz
2018-06-01
This study was performed to investigate the uniformity of water distribution and discharge variations in drip irrigation using magnetic water. Magnetic water was obtained by passing water through a robust permanent magnet connected to a feed pipeline. Two main factors (magnetic and non-magnetic water) and three salt-concentration sub-factors (well water, and the addition of 150 and 300 mg L-1 calcium carbonate to irrigation water) were applied, with three replications. The effect of magnetic water on average dripper discharge was significant (P ≤ 0.05). At the final irrigation, the average dripper discharge and distribution uniformity were higher for the magnetic water than for the non-magnetic water. The magnetic water showed a significant effect (P ≤ 0.01) on the distribution uniformity of the drippers. At the first irrigation, the water distribution uniformity was almost the same for the magnetic and non-magnetic water. The use of magnetic water for drip irrigation is recommended to achieve higher uniformity.
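For reference, a common way to quantify distribution uniformity in drip irrigation is the low-quarter DU ratio; the sketch below uses invented discharge values and may differ from the exact statistic used in this study:

```python
def distribution_uniformity(discharges):
    """Low-quarter distribution uniformity (DU): mean of the lowest 25%
    of dripper discharges divided by the overall mean. A standard
    irrigation metric; values near 1.0 indicate uniform emission."""
    q = sorted(discharges)
    n = max(1, len(q) // 4)
    low_quarter_mean = sum(q[:n]) / n
    return low_quarter_mean / (sum(q) / len(q))

# Eight hypothetical dripper discharges (L/h):
du = distribution_uniformity([3.8, 4.0, 4.1, 4.2, 3.9, 4.0, 4.1, 3.7])
```

Clogging depresses the lowest-quarter discharges first, so DU falls well before the overall mean does, which is why it is a sensitive clogging indicator.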
On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms
NASA Astrophysics Data System (ADS)
Lässig, Jörg; Hoffmann, Karl Heinz
The paper considers the problem of selecting individuals from the current population in genetic algorithms for crossover, with the aim of finding a high-fitness solution to a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task, but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending order by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population, while individuals with lower fitness receive probability zero, assuming that the probability distribution for choosing individuals from the current population can be selected independently for each iteration and each individual. This result is then generalized to performance measures typically applied in practice, such as maximizing the expected fitness value of the best individual seen in any generation.
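A minimal sketch of the rectangular (truncated-uniform) selection strategy the paper identifies as optimal, with a toy fitness function; in practice the cutoff k would be tuned per generation:

```python
import random

def rectangular_select(population, fitness, k):
    """Select a parent uniformly at random from the k fittest individuals
    (rectangular distribution over the fitness-sorted population);
    individuals ranked below k have selection probability zero."""
    elite = sorted(population, key=fitness, reverse=True)[:k]
    return random.choice(elite)

pop = list(range(20))                         # toy genomes: integers
parent = rectangular_select(pop, lambda g: g, k=5)
```

Contrast this with fitness-proportional (roulette-wheel) selection, which assigns every individual a nonzero probability.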
Contractors on Deployed Military Operations: United Kingdom Policy and Doctrine
2005-09-01
contractor’s workforce than military personnel, and contractors cannot be disciplined for violations of the Uniform Code of Military Justice. Moreover... objectives are met. The MoD’s “Defence Agencies” currently employ 60 percent of MoD’s civilian workforce and 11 percent of total uniformed personnel... operations by staff drawn from the contractor’s workforce who are reservist members of the Armed Forces.” With these initiatives, MoD assumes it has
Deflections of Uniformly Loaded Floors. A Beam-Spring Analog.
1984-09-01
...joist floor systems have long been analyzed and designed by assuming that the joists act as simple beams in carrying the design load. This simple method neglects many... Recently, the FEAFLO program was used to predict the behavior of floors constructed with joists whose properties were determined in... uniform joist properties.) Designated N-3 for the floor with nailed sheathing and G-3 for the floor with the sheathing attached by means of a rigid...
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean, lambda-computed from a sample of the input image. A Laplacian distribution with mean, lambda, is generated with uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients from each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image dependent parameter used in the generation of codebook is the mean, lambda, that is included in the coded file to repeat the codebook generation process for decoding.
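The codebook-generation steps can be sketched as follows; the inverse-CDF Laplacian sampler follows the description above, while the HVS weight vector passed in is an illustrative assumption (the paper's perceptually optimal weights are not reproduced here):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

def laplacian_from_uniform(n, lam):
    """Zero-mean Laplacian samples (scale lam) via the inverse CDF of uniforms."""
    u = rng.uniform(size=n) - 0.5
    return -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))

def mvq_codebook(num_vectors, dim, lam, hvs_weights):
    """MVQ-style codebook sketch: Laplacian random vectors whose DCT
    coefficients are shaped by an HVS-like weight vector, then inverse
    transformed to produce the conditioned codebook vectors."""
    vecs = laplacian_from_uniform(num_vectors * dim, lam).reshape(num_vectors, dim)
    coeffs = dct(vecs, norm='ortho', axis=1) * hvs_weights
    return idct(coeffs, norm='ortho', axis=1)
```

Because the generator is seeded and the only image-dependent input is lam, the decoder can regenerate the identical codebook from the value of lam stored in the coded file, which is the key property the abstract describes.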
Crater topography on Titan: implications for landscape evolution
Neish, Catherine D.; Kirk, R.L.; Lorenz, R.D.; Bray, V.J.; Schenk, P.; Stiles, B.W.; Turtle, E.; Mitchell, Ken; Hayes, A.
2013-01-01
We present a comprehensive review of available crater topography measurements for Saturn’s moon Titan. In general, the depths of Titan’s craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundreds of meters shallower than Ganymede’s average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 ± 0.0003 (for the largest crater studied, Menrva, D ~ 425 km) and 0.017 ± 0.004 (for the smallest crater studied, Ksa, D ~ 39 km). When we evaluate the Anderson–Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan’s craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close ‘airless’ analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan’s surface, the only body in the outer Solar System with extensive surface–atmosphere exchange.
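The goodness-of-fit reasoning can be illustrated with a uniformity test on relative depths; the paper uses the Anderson-Darling statistic, but a one-sample Kolmogorov-Smirnov test against U(0, 1) is shown here as a simpler stand-in, on hypothetical relative-depth values:

```python
import numpy as np
from scipy import stats

# Hypothetical relative crater depths: 0 = fresh, 1 = completely infilled.
rel_depth = np.array([0.05, 0.2, 0.35, 0.5, 0.6, 0.75, 0.9])

# One-sample KS test of the null hypothesis that the relative depths
# are drawn from the uniform distribution on [0, 1].
stat, p = stats.kstest(rel_depth, 'uniform')
```

A large p-value here means the data are consistent with uniform infilling, the conclusion the abstract draws for Titan's craters.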
49 CFR 24.6 - Administration of jointly-funded projects.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Agency, then the Lead Agency shall designate one of such Agencies to assume the cognizant role. At a... in compliance with the provisions of the Uniform Act and this part. All federally-assisted activities...
NASA Astrophysics Data System (ADS)
Jasenak, Brian
2017-02-01
Adoption of ultraviolet light-emitting diodes (UV LEDs) is accelerating; they are being used in new applications such as UV curing, germicidal irradiation, nondestructive testing, and forensic analysis. In many of these applications, it is critically important to produce a uniform light distribution and consistent surface irradiance. Flat panes of fused quartz, silica, or glass are commonly used to cover and protect UV LED arrays, but they do not offer the advantages of an optical lens design. An investigation was conducted to determine the effect of a secondary glass optic on the uniformity of the light distribution and irradiance. Glass optics capable of transmitting UV-A, UV-B, and UV-C wavelengths can improve light distribution, uniformity, and intensity. In this work, two simulation studies were created to illustrate distinct irradiance patterns desirable for potential real-world applications. The first study investigates the use of a multi-UV-LED array and optic to create a uniform irradiance pattern on a flat, two-dimensional (2D) target surface; the uniformity was improved by designing both the LED array and the molded optic to produce a homogeneous pattern. The second study investigates the use of an LED light source and molded optic to improve the light uniformity on the inside of a canister; this case illustrates the need for careful selection of the LED based on its light distribution and subsequent design of the optic, which uses total internal reflection to create an optimized light distribution. The combination of the LED and molded optic showed significant improvement in uniformity on the inner surface of the canister. The simulations illustrate how the application of optics can significantly improve UV light distribution, which can be critical in applications such as UV curing and sterilization.
Premixing quality and flame stability: A theoretical and experimental study
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.; Heywood, J. B.; Tabaczynski, R. J.
1979-01-01
Models for predicting flame ignition and blowout in a combustor primary zone are presented. A correlation for the blowoff velocity of premixed turbulent flames is developed using the basic quantities of turbulent flow and the laminar flame speed. A statistical model employing a Monte Carlo calculation procedure is developed to account for nonuniformities in a combustor primary zone. An overall kinetic rate equation is used to describe the fuel oxidation process. The model is used to predict the lean ignition and blowout limits of premixed turbulent flames; the effects of mixture nonuniformity on the lean ignition limit are explored using an assumed distribution of fuel-air ratios. Data on the effects of variations in inlet temperature, reference velocity, and mixture uniformity on the lean ignition and blowout limits of gaseous propane-air flames are presented.
NASA Astrophysics Data System (ADS)
Iveson, Simon M.
2003-06-01
Pietruszczak and coworkers (Internat. J. Numer. Anal. Methods Geomech. 1994; 18(2):93-105; Comput. Geotech. 1991; 12( ):55-71) have presented a continuum-based model for predicting the dynamic mechanical response of partially saturated granular media with viscous interstitial liquids. In their model they assume that the gas phase is distributed uniformly throughout the medium as discrete spherical air bubbles occupying the voids between the particles. However, their derivation of the air pressure inside these gas bubbles is inconsistent with their stated assumptions. In addition the resultant dependence of gas pressure on liquid saturation lies outside of the plausible range of possible values for discrete air bubbles. This results in an over-prediction of the average bulk modulus of the void phase. Corrected equations are presented.
3-D dynamic rupture simulations of the 2016 Kumamoto, Japan, earthquake
NASA Astrophysics Data System (ADS)
Urata, Yumi; Yoshida, Keisuke; Fukuyama, Eiichi; Kubo, Hisahiko
2017-11-01
Using 3-D dynamic rupture simulations, we investigated the 2016 Mw7.1 Kumamoto, Japan, earthquake to elucidate why and how the rupture of the main shock propagated successfully, assuming a complicated fault geometry estimated on the basis of the distributions of the aftershocks. The Mw7.1 main shock occurred along the Futagawa and Hinagu faults. Within 28 h before the main shock, three M6-class foreshocks occurred. Their hypocenters were located along the Hinagu and Futagawa faults, and their focal mechanisms were similar to that of the main shock. Therefore, an extensive stress shadow should have been generated on the fault plane of the main shock. First, we estimated the geometry of the fault planes of the three foreshocks as well as that of the main shock based on the temporal evolution of the relocated aftershock hypocenters. We then evaluated the static stress changes on the main shock fault plane that were due to the occurrence of the three foreshocks, assuming elliptical cracks with constant stress drops on the estimated fault planes. The obtained static stress change distribution indicated that Coulomb failure stress change (ΔCFS) was positive just below the hypocenter of the main shock, while the ΔCFS in the shallow region above the hypocenter was negative. Therefore, these foreshocks could encourage the initiation of the main shock rupture and could hinder the propagation of the rupture toward the shallow region. Finally, we conducted 3-D dynamic rupture simulations of the main shock using the initial stress distribution, which was the sum of the static stress changes caused by these foreshocks and the regional stress field. Assuming a slip-weakening law with uniform friction parameters, we computed 3-D dynamic rupture by varying the friction parameters and the values of the principal stresses. We obtained feasible parameter ranges that could reproduce the characteristic features of the main shock rupture revealed by seismic waveform analyses. 
We also observed that the free surface encouraged the slip evolution of the main shock.
NASA Astrophysics Data System (ADS)
Forestier, M.; Haldenwang, P.
We consider free convection driven by a heated vertical plate immersed in a nonlinearly stratified medium. The plate supplies a uniform horizontal heat flux to a fluid, the bulk of which has a stable stratification, characterized by a non-uniform vertical temperature gradient. This gradient is assumed to have a typical length scale of variation, denoted Z0, while 0, and the physical properties of the medium. We then apply the new theory to the natural convection affecting the vapour phase in a liquefied pure gas tank (e.g. the cryogenic storage of hydrogen). It is assumed that the cylindrical storage tank is subject to a constant uniform heat flux on its lateral and top walls. We are interested in the vapour motion above a residual layer of liquid in equilibrium with the vapour. High-precision axisymmetric numerical computations show that the flow remains steady for a large range of parameters, and that a bulk stratification characterized by a quadratic temperature profile is undoubtedly present. The application of the theory permits a comparison of the numerical and analytic results, showing that the theory satisfactorily predicts the primary dynamical and thermal properties of the storage tank.
Söllner, Anke; Bröder, Arndt; Glöckner, Andreas; Betsch, Tilmann
2014-02-01
When decision makers are confronted with different problems and situations, do they use a uniform mechanism as assumed by single-process models (SPMs) or do they choose adaptively from a set of available decision strategies as multiple-strategy models (MSMs) imply? Both frameworks of decision making have gathered a lot of support, but only rarely have they been contrasted with each other. Employing an information intrusion paradigm for multi-attribute decisions from givens, SPM and MSM predictions on information search, decision outcomes, attention, and confidence judgments were derived and tested against each other in two experiments. The results consistently support the SPM view: Participants seemingly using a "take-the-best" (TTB) strategy do not ignore TTB-irrelevant information as MSMs would predict, but adapt the amount of information searched, choose alternative choice options, and show varying confidence judgments contingent on the quality of the "irrelevant" information. The uniformity of these findings underlines the adequacy of the novel information intrusion paradigm and comprehensively promotes the notion of a uniform decision making mechanism as assumed by single-process models. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.
2018-01-01
A preparation technology for ultra-thin carbon-fiber paper is reported. Carbon-fiber distribution homogeneity has a great influence on the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fiber across two or more binary (two-value) images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the higher the uniformity of the carbon-fiber distribution. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixture of uniform distributions.
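A concrete instance of the construction: the normal distribution arises as a scale mixture of uniforms, with X | V ~ Uniform(-sqrt(V), sqrt(V)) and V ~ Gamma(3/2, scale = 2*sigma^2) giving X ~ N(0, sigma^2). A sketch that checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(42)

def normal_via_uniform_mixture(n, sigma=1.0):
    """Draw N(0, sigma^2) samples as a scale mixture of uniforms:
    V ~ Gamma(3/2, scale=2*sigma**2), then X | V ~ Uniform(-sqrt(V), sqrt(V))."""
    v = rng.gamma(shape=1.5, scale=2.0 * sigma**2, size=n)
    half = np.sqrt(v)
    return rng.uniform(-half, half)
```

This latent-uniform representation is what makes Gibbs sampling convenient in the Bayesian setting the abstract describes: conditional on V, the data likelihood is flat over an interval.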
The extension of a uniform canopy reflectance model to include row effects
NASA Technical Reports Server (NTRS)
Suits, G. H. (Principal Investigator)
1981-01-01
The effect of row structure is assumed to be caused by the variation in density of vegetation across rows rather than to a profile in canopy height. The calculation of crop reflectance using vegetation density modulation across rows follows a parallel procedure to that for a uniform canopy. Predictions using the row model for wheat show that the effect of changes in sun to row azimuth are greatest in Landsat Band 5 (red band) and can result in underestimation of crop vigor.
Fragmentation of a Filamentary Cloud Permeated by a Perpendicular Magnetic Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanawa, Tomoyuki; Kudoh, Takahiro; Tomisaka, Kohji
We examine the linear stability of an isothermal filamentary cloud permeated by a perpendicular magnetic field. Our model cloud is assumed to be supported by gas pressure against self-gravity in the unperturbed state. For simplicity, the density distribution is assumed to be symmetric around the axis. Also for simplicity, the initial magnetic field is assumed to be uniform, and turbulence is not taken into account. The perturbation equation is formulated to be an eigenvalue problem. The growth rate is obtained as a function of the wavenumber for fragmentation along the axis and the magnetic field strength. The growth rate depends critically on the outer boundary. If the displacement vanishes in regions very far from the cloud axis (fixed boundary), cloud fragmentation is suppressed by a moderate magnetic field, which means the plasma beta is below 1.67 on the cloud axis. If the displacement is constant along the magnetic field in regions very far from the cloud, the cloud is unstable even when the magnetic field is infinitely strong. The cloud is deformed by circulation in the plane perpendicular to the magnetic field. The unstable mode is not likely to induce dynamical collapse, since it is excited even when the whole cloud is magnetically subcritical. For both boundary conditions, the magnetic field increases the wavelength of the most unstable mode. We find that the magnetic force suppresses compression perpendicular to the magnetic field especially in regions of low density.
Shizgal, Bernie D
2018-05-01
This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that well describe the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988)JSTPBS0022-471510.1007/BF01016429].
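For reference, one common (unnormalized) form of the Kappa distribution mentioned above, together with a numerical check that it approaches a Maxwellian as kappa grows; conventions for the Kappa family vary across the literature, so this parameterization is an assumption rather than the paper's exact form:

```python
import numpy as np

def kappa_pdf(v, theta=1.0, kappa=50.0):
    """Unnormalized 1-D Kappa distribution in one common convention;
    it tends to the Maxwellian exp(-v**2 / theta**2) as kappa -> infinity."""
    return (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

v = np.linspace(-5.0, 5.0, 1001)
dv = v[1] - v[0]
f_kappa = kappa_pdf(v, kappa=500.0)
f_kappa /= f_kappa.sum() * dv          # normalize numerically on the grid
f_gauss = np.exp(-v**2)
f_gauss /= f_gauss.sum() * dv
```

Smaller kappa values fatten the power-law tails, which is the nonequilibrium feature the Fokker-Planck steady states reproduce.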
NASA Astrophysics Data System (ADS)
Flynn, Ryan
2007-12-01
The distribution of biological characteristics such as clonogen density, proliferation, and hypoxia throughout tumors is generally non-uniform; it follows that optimal dose prescriptions should also be non-uniform and tumor-specific. Advances in intensity modulated x-ray therapy (IMXT) technology have made the delivery of custom-made non-uniform dose distributions possible in practice. Intensity modulated proton therapy (IMPT) has the potential to deliver non-uniform dose distributions as well, while significantly reducing normal-tissue and organ-at-risk dose relative to IMXT. In this work, a specialized treatment planning system was developed for the purpose of optimizing and comparing biologically based IMXT and IMPT plans. The IMXT systems of step-and-shoot (IMXT-SAS) and helical tomotherapy (IMXT-HT) and the IMPT systems of intensity modulated spot scanning (IMPT-SS) and distal gradient tracking (IMPT-DGT) were simulated. A thorough phantom study was conducted in which several subvolumes, contained within a base tumor region, were boosted or avoided with IMXT and IMPT. Different boosting situations were simulated by varying the size, proximity, and the doses prescribed to the subvolumes, and the size of the phantom. IMXT and IMPT were also compared for a whole brain radiation therapy (WBRT) case, in which a brain metastasis was simultaneously boosted and the hippocampus was avoided. Finally, IMXT and IMPT dose distributions were compared for the case of a non-uniform dose prescription in a head and neck cancer patient, based on PET imaging with the Cu(II)-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-ATSM) hypoxia marker. The non-uniform dose distributions within the tumor region were comparable for IMXT and IMPT.
IMPT, however, was capable of delivering the same non-uniform dose distributions within a tumor using a 180° arc as for a full 360° rotation, which resulted in the reduction of normal tissue integral dose by a factor of up to three relative to IMXT, and the complete sparing of organs at risk distal to the tumor region.
Three-phase boundary length in solid-oxide fuel cells: A mathematical model
NASA Astrophysics Data System (ADS)
Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf
A mathematical model to calculate the volume specific three-phase boundary length in the porous composite electrodes of solid-oxide fuel cell is presented. The model is exclusively based on geometrical considerations accounting for porosity, particle diameter, particle size distribution, and solids phase distribution. Results are presented for uniform particle size distribution as well as for non-uniform particle size distribution.
Spatial Burnout in Water Reactors with Nonuniform Startup Distributions of Uranium and Boron
NASA Technical Reports Server (NTRS)
Fox, Thomas A.; Bogart, Donald
1955-01-01
Spatial burnout calculations have been made of two types of water-moderated cylindrical reactor using boron as a burnable poison to increase reactor life. Specific reactors studied were a version of the Submarine Advanced Reactor (SAR) and a supercritical water reactor (SCW). Burnout characteristics such as reactivity excursion, neutron-flux and heat-generation distributions, and uranium and boron distributions have been determined for core lives corresponding to a burnup of approximately 7 kilograms of fully enriched uranium. All reactivity calculations have been based on the actual nonuniform distribution of absorbers existing during intervals of core life. Spatial burnout of uranium and boron and spatial build-up of fission products and equilibrium xenon have been considered. Calculations were performed on the NACA nuclear reactor simulator using two-group diffusion theory. The following reactor burnout characteristics have been demonstrated: 1. A significantly lower excursion in reactivity during core life may be obtained by nonuniform rather than uniform startup distribution of uranium. Results for SCW with uranium distributed to provide constant radial heat generation and a core life corresponding to a uranium burnup of 7 kilograms indicated a maximum excursion in reactivity of 2.5 percent. This compared to a maximum excursion of 4.2 percent obtained for the same core life when uranium was uniformly distributed at startup. Boron was incorporated uniformly in these cores at startup. 2. It is possible to approach constant radial heat generation during the life of a cylindrical core by means of startup nonuniform radial and axial distributions of uranium and boron. Results for SCW with nonuniform radial distribution of uranium to provide constant radial heat generation at startup and with boron for longevity indicate relatively small departures from the initially constant radial heat generation distribution during core life.
Results for SAR with a sinusoidal distribution rather than uniform axial distributions of boron indicate significant improvements in axial heat generation distribution during the greater part of core life. 3. Uranium investments for cylindrical reactors with nonuniform radial uranium distributions which provide constant radial heat generation per unit core volume are somewhat higher than for reactors with uniform uranium concentration at startup. On the other hand, uranium investments for reactors with axial boron distributions which approach constant axial heat generation are somewhat smaller than for reactors with uniform boron distributions at startup.
The current impact flux on Mars and its seasonal variation
NASA Astrophysics Data System (ADS)
JeongAhn, Youngmin; Malhotra, Renu
2015-12-01
We calculate the present-day impact flux on Mars and its variation over the martian year, using the current data on the orbital distribution of known Mars-crossing minor planets. We adapt the Öpik-Wetherill formulation for calculating collision probabilities, paying careful attention to the non-uniform distribution of the perihelion longitude and the argument of perihelion owed to secular planetary perturbations. We find that, at the current epoch, the Mars crossers have an axial distribution of the argument of perihelion, and the mean direction of their eccentricity vectors is nearly aligned with Mars' eccentricity vector. These previously neglected angular non-uniformities have the effect of depressing the mean annual impact flux by a factor of about 2 compared to the estimate based on a uniform random distribution of the angular elements of Mars-crossers; the amplitude of the seasonal variation of the impact flux is likewise depressed by a factor of about 4-5. We estimate that the flux of large impactors (of absolute magnitude H < 16) within ±30° of Mars' aphelion is about three times larger than when the planet is near perihelion. Extrapolation of our results to a model population of meter-size Mars-crossers shows that if these small impactors have a uniform distribution of their angular elements, then their aphelion-to-perihelion impact flux ratio would be 11-15, but if they track the orbital distribution of the large impactors, including their non-uniform angular elements, then this ratio would be about 3. Comparison of our results with the current dataset of fresh impact craters on Mars (detected with Mars-orbiting spacecraft) appears to rule out the uniform distribution of angular elements.
Application of Statistically Derived CPAS Parachute Parameters
NASA Technical Reports Server (NTRS)
Romero, Leah M.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for the determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes.
This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
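The "best fit" selection step can be sketched as a maximum-likelihood comparison across candidate families. The memo uses MATLAB fits plus engineering judgment; this Python stand-in simply scores log-likelihoods, and the lognormal fit assumes positive data with the location fixed at zero:

```python
import numpy as np
from scipy import stats

def best_fit(data):
    """Fit a few candidate distributions by maximum likelihood and return
    the name of the one with the highest log-likelihood (a crude stand-in
    for the engineering-judgment step described above)."""
    data = np.asarray(data, dtype=float)
    fits = {
        'normal': (stats.norm, stats.norm.fit(data)),
        'lognormal': (stats.lognorm, stats.lognorm.fit(data, floc=0.0)),
        'uniform': (stats.uniform, stats.uniform.fit(data)),
    }
    scores = {name: dist.logpdf(data, *params).sum()
              for name, (dist, params) in fits.items()}
    return max(scores, key=scores.get)
```

In practice one would also penalize model complexity (AIC/BIC) and inspect the fits visually before adopting a dispersion for Monte Carlo use.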
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, R.
1997-09-01
In inertial confinement fusion (ICF) experiments, irradiation uniformity is improved by passing laser beams through distributed phase plates (DPPs), which produce focused intensity profiles with well-controlled, reproducible envelopes modulated by fine random speckle. [C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Y. Kato and K. Mima, Appl. Phys. B 29, 186 (1982); Y. Kato et al., Phys. Rev. Lett. 53, 1057 (1984); Laboratory for Laser Energetics LLE Review 33, NTIS Document No. DOE/DP/40200-65, 1987 (unpublished), p. 1; Laboratory for Laser Energetics LLE Review 63, NTIS Document No. DOE/SF/19460-91, 1995 (unpublished), p. 1.] A uniformly ablating plasma atmosphere acts to reduce the contribution of the speckle to the time-averaged irradiation nonuniformity by causing the intensity distribution to move relative to the absorption layer of the plasma. This occurs most directly as the absorption layer in the plasma moves with the ablation-driven flow, but it is shown that the effect of the accumulating ablated plasma on the phase of the laser light also makes a quantitatively significant contribution. Analytical results are obtained using the paraxial approximation applied to the beam propagation, and a simple statistical model is assumed for the properties of DPPs. The reduction in the time-averaged spatial spectrum of the speckle due to these effects is shown to be quantitatively significant within time intervals characteristic of atmospheric hydrodynamics under typical ICF irradiation intensities. © 1997 American Institute of Physics.
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor to sample from the uniform distribution over optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion with the sampler implemented in BADGER and a previously described importance sampling (IS) technique, and find that on high-divergence data sets MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique while avoiding the bias inherent in the IS technique.
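For intuition, the sampling goal can be realized exactly on tiny examples by exhaustive breadth-first search, which is precisely what does not scale and what MC4Inversion's MCMC replaces. A sketch over signed permutations (an inversion reverses a segment and flips its signs):

```python
import random

def neighbors(p):
    """All signed permutations reachable from p by one inversion
    (reverse a segment and flip its signs)."""
    out = []
    for i in range(len(p)):
        for j in range(i, len(p)):
            q = list(p)
            q[i:j + 1] = [-x for x in reversed(q[i:j + 1])]
            out.append(tuple(q))
    return out

def sample_optimal_path(start, rng=random):
    """Uniformly sample one minimum-length inversion sorting path from
    `start` to the identity, by brute-force BFS (tractable only for tiny n)."""
    ident = tuple(range(1, len(start) + 1))
    # BFS layers outward from the identity, counting shortest paths to each state.
    dist, count = {ident: 0}, {ident: 1}
    frontier = [ident]
    while start not in dist:
        nxt = []
        for p in frontier:
            for q in neighbors(p):
                if q not in dist:
                    dist[q], count[q] = dist[p] + 1, 0
                    nxt.append(q)
                if dist[q] == dist[p] + 1:
                    count[q] += count[p]
        frontier = nxt
    # Walk back toward the identity; choosing each predecessor with
    # probability proportional to its shortest-path count makes every
    # optimal sorting path equally likely.
    path, cur = [start], start
    while cur != ident:
        preds = [q for q in neighbors(cur) if dist.get(q) == dist[cur] - 1]
        cur = rng.choices(preds, weights=[count[q] for q in preds])[0]
        path.append(cur)
    return path
```

The state space grows as n! * 2**n, so this enumeration is only a didactic baseline against which a sampler like MC4Inversion can be validated on small genomes.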
Vitrac, Olivier; Leblanc, Jean-Charles
2007-02-01
A generic methodology for the assessment of consumer exposure to substances migrating from packaging materials into foodstuffs during storage is presented. Consumer exposure at the level of individual households is derived from the probabilistic modeling of the contamination of all packed food product units (e.g. yogurt pot, milk bottle, etc.) consumed by a given household over 1 year. Exposure of a given population is estimated by gathering the exposure distributions of individual households with suitable weights (conveniently, household sizes). Calculations are made by combining (i) an efficient resolution of migration models and (ii) a methodology accounting for different sources of uncertainty and variability. The full procedure was applied to the assessment of consumer exposure to styrene from yogurt pots based on yearly purchase data of more than 5400 households in France (about 2 million yogurt pots) and an initial concentration c0 of styrene in yogurt pot walls, which is assumed to be normally distributed with an average value of 500 mg kg-1 and a standard deviation of 150 mg kg-1. Results are discussed regarding both sensitivity of the migration model to boundary conditions and household practices. By assuming a partition coefficient of 1 and a Biot number of 100, the estimated median household exposure to styrene ranged between 1 and 35 microg day-1 person-1 (5th and 95th percentiles) with a likely value of 12 microg day-1 person-1 (50th percentile). It was found that exposure does not vary independently with the average consumption rate and contact times. Thus, falsely assuming a uniform contact time equal to the sell-by-date for all yogurts significantly overestimates the daily exposure (5th and 95th percentiles of 2 and 110 microg day-1 person-1, respectively) since high consumers showed quicker turnover of stock.
Improvement of illumination uniformity for LED flat panel light by using micro-secondary lens array.
Lee, Hsiao-Wen; Lin, Bor-Shyh
2012-11-05
LED flat panel lights are an innovative lighting product of recent years. However, current flat panel light products still have some drawbacks, such as narrow lighting areas and hot spots. In this study, a micro-secondary lens array technique was proposed and applied to the design of the light guide surface to improve illumination uniformity. By using the micro-secondary lens array, the candela distribution of the LED flat panel light can be adjusted to approximate a batwing distribution, improving illumination uniformity. The experimental results show that the enhancement of the floor illumination uniformity is about 61%, and that of the wall illumination uniformity is about 20.5%.
NASA Astrophysics Data System (ADS)
Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal
2013-10-01
Thermal imagers, and the infrared array sensors used in them, undergo a calibration procedure during manufacturing in which their voltage sensitivity to incident radiation is evaluated. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not held to such elevated standards, it is still important that the image faithfully represent temperature variations across the scene. Detectors in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Such an optical system, when exposed to a uniform Lambertian source, often forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction it is essential to correctly predict the irradiation distribution produced by a uniform source. In this article a non-uniformity correction method is presented that takes into account the optical system's radiometry. Predictions of the irradiation distribution have been confronted with measured irradiance values. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
Yuan, Cheng-song; Chen, Wan; Chen, Chen; Yang, Guang-hua; Hu, Chao; Tang, Kang-lai
2015-01-01
We investigated the effects on subtalar joint stress distribution after cannulated screw insertion at different positions and directions. After establishing a 3-dimensional geometric model of a normal subtalar joint, we analyzed the most ideal cannulated screw insertion position and approach for subtalar joint stress distribution and compared the differences in loading stress, antirotary strength, and anti-inversion/eversion strength among lateral-medial antiparallel screw insertion, traditional screw insertion, and ideal cannulated screw insertion. The screw insertion approach allowing the most uniform subtalar joint loading stress distribution was lateral screw insertion near the border of the talar neck plus medial screw insertion close to the ankle joint. For stress distribution uniformity, antirotary strength, and anti-inversion/eversion strength, lateral-medial antiparallel screw insertion was superior to traditional double-screw insertion. Compared with ideal cannulated screw insertion, slightly poorer stress distribution uniformity and better antirotary strength and anti-inversion/eversion strength were observed for lateral-medial antiparallel screw insertion. Traditional single-screw insertion was better than double-screw insertion for stress distribution uniformity but worse for antirotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion was slightly worse for stress distribution uniformity than was ideal cannulated screw insertion but superior to traditional screw insertion. It was better than both ideal cannulated screw insertion and traditional screw insertion for antirotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion is an approach with simple localization, convenient operation, and good safety. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Circular, confined distribution for charged particle beams
Garnett, Robert W.; Dobelbower, M. Christian
1995-01-01
A charged particle beam line is formed with magnetic optics that manipulate the charged particle beam to form the beam having a generally rectangular configuration to a circular beam cross-section having a uniform particle distribution at a predetermined location. First magnetic optics form a charged particle beam to a generally uniform particle distribution over a square planar area at a known first location. Second magnetic optics receive the charged particle beam with the generally square configuration and affect the charged particle beam to output the charged particle beam with a phase-space distribution effective to fold corner portions of the beam toward the core region of the beam. The beam forms a circular configuration having a generally uniform spatial particle distribution over a target area at a predetermined second location.
On the vertical distribution of water vapor in the Martian tropics
NASA Technical Reports Server (NTRS)
Haberle, Robert M.
1988-01-01
Although measurements of the column abundance of atmospheric water vapor on Mars have been made, measurements of its vertical distribution have not. How water is distributed in the vertical is fundamental to atmosphere-surface exchange processes, and especially to transport within the atmosphere. Several lines of evidence suggest that in the lowest several scale heights of the atmosphere, water vapor is nearly uniformly distributed. However, most of these arguments are suggestive rather than conclusive since they only demonstrate that the altitude to saturation is very high if the observed amount of water vapor is distributed uniformly. A simple argument is presented, independent of the saturation constraint, which suggests that in tropical regions, water vapor on Mars should be very nearly uniformly mixed on an annual and zonally averaged basis.
Brodsky, Leonid; Leontovich, Andrei; Shtutman, Michael; Feinstein, Elena
2004-01-01
Mathematical methods of analysis of microarray hybridizations deal with gene expression profiles as elementary units. However, some of these profiles do not reflect a biologically relevant transcriptional response, but rather stem from technical artifacts. Here, we describe two technically independent but rationally interconnected methods for identification of such artifactual profiles. Our diagnostics are based on detection of deviations from uniformity, which is assumed as the main underlying principle of microarray design. Method 1 is based on detection of non-uniformity of microarray distribution of printed genes that are clustered based on the similarity of their expression profiles. Method 2 is based on evaluation of the presence of gene-specific microarray spots within the slides’ areas characterized by an abnormal concentration of low/high differential expression values, which we define as ‘patterns of differentials’. Applying two novel algorithms, for nested clustering (method 1) and for pattern detection (method 2), we can make a dual estimation of the profile’s quality for almost every printed gene. Genes with artifactual profiles detected by method 1 may then be removed from further analysis. Suspicious differential expression values detected by method 2 may be either removed or weighted according to the probabilities of patterns that cover them, thus diminishing their input in any further data analysis. PMID:14999086
An approximate solution for interlaminar stresses in laminated composites: Applied mechanics program
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Herakovich, Carl T.
1992-01-01
An approximate solution for interlaminar stresses in finite width, laminated composites subjected to uniform extensional, and bending loads is presented. The solution is based upon the principle of minimum complementary energy and an assumed, statically admissible stress state, derived by considering local material mismatch effects and global equilibrium requirements. The stresses in each layer are approximated by polynomial functions of the thickness coordinate, multiplied by combinations of exponential functions of the in-plane coordinate, expressed in terms of fourteen unknown decay parameters. Imposing the stationary condition of the laminate complementary energy with respect to the unknown variables yields a system of fourteen non-linear algebraic equations for the parameters. Newton's method is implemented to solve this system. Once the parameters are known, the stresses can be easily determined at any point in the laminate. Results are presented for through-thickness and interlaminar stress distributions for angle-ply, cross-ply (symmetric and unsymmetric laminates), and quasi-isotropic laminates subjected to uniform extension and bending. It is shown that the solution compares well with existing finite element solutions and represents an improved approximate solution for interlaminar stresses, primarily at interfaces where global equilibrium is satisfied by the in-plane stresses, but large local mismatch in properties requires the presence of interlaminar stresses.
A three-dimensional model of solar radiation transfer in a non-uniform plant canopy
NASA Astrophysics Data System (ADS)
Levashova, N. T.; Mukhartova, Yu V.
2018-01-01
A three-dimensional (3D) model of solar radiation transfer in a non-uniform plant canopy was developed. It is based on radiative transfer equations and a so-called turbid medium assumption. The model takes into account the multiple scattering contributions of plant elements in radiation fluxes. These enable more accurate descriptions of plant canopy reflectance and transmission in different spectral bands. The model was applied to assess the effects of plant canopy heterogeneity on solar radiation transmission and to quantify the difference in radiation transfer between photosynthetically active radiation PAR (Δλ = 0.39-0.72 μm) and near-infrared solar radiation NIR (Δλ = 0.72-3.00 μm). Comparisons of the radiative transfer fluxes simulated by the 3D model within a plant canopy consisting of sparsely planted fruit trees (plant area index, PAI = 0.96 m2 m-2) with radiation fluxes simulated by a one-dimensional (1D) approach, which assumes horizontal homogeneity of plant and leaf area distributions, showed that, for sunny weather conditions with a high solar elevation angle, application of a simplified 1D approach can result in an underestimation of transmitted solar radiation by about 22% for PAR and by about 26% for NIR.
Understanding a Normal Distribution of Data.
Maltenfort, Mitchell G
2015-12-01
Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?
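As a quick complement to the questions above (a standard-library sketch of the usual first checks, not taken from the article), sample skewness and excess kurtosis are both near zero for normal data, so large values are a simple red flag before reaching for formal normality tests:

```python
import math

def skewness(xs):
    """Sample skewness: third standardized moment; 0 for symmetric data."""
    n, m = len(xs), sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 3 for x in xs) / (n * sd ** 3)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3; 0 for a normal distribution."""
    n, m = len(xs), sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 4 for x in xs) / (n * var ** 2) - 3.0

print(skewness([-2, -1, 0, 1, 2]))     # 0.0: symmetric data
print(skewness([1, 1, 1, 2, 10]) > 1)  # True: strong right skew
```

Heavily skewed or heavy-tailed data flagged this way are the cases where the non-normal methods the abstract alludes to (transformations, nonparametric tests) come into play.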
Logical optimization for database uniformization
NASA Technical Reports Server (NTRS)
Grant, J.
1984-01-01
Data base uniformization refers to the building of a common user interface facility to support uniform access to any or all of a collection of distributed heterogeneous data bases. Such a system should enable a user, situated anywhere along a set of distributed data bases, to access all of the information in the data bases without having to learn the various data manipulation languages. Furthermore, such a system should leave intact the component data bases, and in particular, their already existing software. A survey of various aspects of the data bases uniformization problem and a proposed solution are presented.
Uniformity of LED light illumination in application to direct imaging lithography
NASA Astrophysics Data System (ADS)
Huang, Ting-Ming; Chang, Shenq-Tsong; Tsay, Ho-Lin; Hsu, Ming-Ying; Chen, Fong-Zhi
2016-09-01
Direct imaging has been widely applied in lithography for a long time because of its simplicity and easy maintenance. Although this method limits lithography resolution, it is still adopted in industry. Uniformity of UV irradiance over a designed area is an important requirement. While mercury lamps were used as the light source in the early stage, LEDs have drawn a lot of attention for several reasons. Although LED performance keeps improving, arrays of LEDs are required to obtain the desired irradiance because of the limited brightness of a single LED. Several effects that degrade the uniformity of UV irradiance are considered, such as alignment of optics, temperature of each LED, LED-to-LED performance variation due to production uniformity, and pointing of the LED module. The effects of these factors are considered to study the uniformity of LED light illumination. Numerical analysis is performed by assuming a series of control factors to gain a better understanding of each factor.
PATCHY BLAZAR HEATING: DIVERSIFYING THE THERMAL HISTORY OF THE INTERGALACTIC MEDIUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamberts, Astrid; Chang, Philip; Pfrommer, Christoph
TeV-blazars potentially heat the intergalactic medium (IGM) as their gamma rays interact with photons of the extragalactic background light to produce electron–positron pairs, which lose their kinetic energy to the surrounding medium through plasma instabilities. This results in a heating mechanism that is only weakly sensitive to the local density, and therefore approximately spatially uniform, naturally producing an inverted temperature–density relation in underdense regions. In this paper we go beyond the approximation of uniform heating and quantify the heating rate fluctuations due to the clustered distribution of blazars and how this impacts the thermal history of the IGM. We analytically compute a filtering function that relates the heating rate fluctuations to the underlying dark matter density field. We implement it in the cosmological code GADGET-3 and perform large-scale simulations to determine the impact of inhomogeneous heating. We show that because of blazar clustering, blazar heating is inhomogeneous for z ≳ 2. At high redshift, the temperature–density relation shows an important scatter and presents a low temperature envelope of unheated regions, in particular at low densities and within voids. However, the median temperature of the IGM is close to that in the uniform case, albeit slightly lower at low redshift. We find that blazar heating is more complex than initially assumed and that the temperature–density relation is not unique. Our analytic model for the heating rate fluctuations couples well with large-scale simulations and provides a cost-effective alternative to subgrid models.
The NUONCE engine for LEO networks
NASA Technical Reports Server (NTRS)
Lo, Martin W.; Estabrook, Polly
1995-01-01
Typical LEO networks use constellations which provide a uniform coverage. However, the demand for telecom service is dynamic and unevenly distributed around the world. We examine a more efficient and cost effective design by matching the satellite coverage with the cyclical demand for service around the world. Our approach is to use a non-uniform satellite distribution for the network. We have named this constellation design NUONCE for Non Uniform Optimal Network Communications Engine.
Cooling water distribution system
Orr, Richard
1994-01-01
A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using an interconnected series of radial guide elements, a plurality of circumferential collector elements and collector boxes to collect and feed the cooling water into distribution channels extending along the curved surface of the steel containment vessel. The cooling water is uniformly distributed over the curved surface by a plurality of weirs in the distribution channels.
Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary
2018-04-29
Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
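For reference, a minimal one-way random-effects ICC (one common variant; the abstract does not specify which ICC form its simulations use, so treat this as an illustrative sketch) can be computed directly from a subjects × raters table:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1) from a subjects x raters table:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # between-subject
    msw = sum((x - m) ** 2 for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))                       # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

print(icc_oneway([[1, 1], [2, 2], [3, 3]]))  # 1.0: raters agree exactly
print(icc_oneway([[1, 2], [2, 1], [3, 3]]))  # lower once raters disagree
```

The formula makes the abstract's point visible: MSB reflects the spread of subjects (study design), MSW reflects rater disagreement (scale quality), and both drive the single ICC number.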
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
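A software analogue of the scheme (illustrative only; the table, seed, and names below are assumptions, not the circuit's actual design) maps uniformly distributed generator output through a cumulative probability table to produce nonuniform 8-bit values:

```python
import bisect
import random

def make_sampler(pmf, seed=None):
    """Inverse-CDF sampler: turn uniform draws into 8-bit values
    following the (unnormalized) probability table `pmf`."""
    cdf, total = [], 0.0
    for p in pmf:
        total += p
        cdf.append(total)
    rng = random.Random(seed)  # stands in for the uniform hardware generators
    return lambda: bisect.bisect_left(cdf, rng.random() * total)

# Illustrative target: a triangular distribution over 0..255
pmf = [min(v, 255 - v) + 1 for v in range(256)]
draw = make_sampler(pmf, seed=42)
samples = [draw() for _ in range(10_000)]
print(min(samples) >= 0 and max(samples) <= 255)  # True
```

The hardware pipeline of comparators and memories plays the role of the `bisect` lookup here: each stage conditions on the bits already produced, which is equivalent to walking the cumulative table.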
Spatial effect of conical angle on optical-thermal distribution for circumferential photocoagulation
Truong, Van Gia; Park, Suhyun; Tran, Van Nam; Kang, Hyun Wook
2017-01-01
A uniformly diffusing applicator can be advantageous for laser treatment of tubular tissue. The current study investigated various conical angles for diffuser tips as a critical factor for achieving radially uniform light emission. A customized goniometer was employed to characterize the spatial uniformity of the light propagation. An ex vivo model was developed to quantitatively compare the temperature development and irreversible tissue coagulation. The 10-mm diffuser tip with angle at 25° achieved a uniform longitudinal intensity profile (i.e., 0.90 ± 0.07) as well as a consistent thermal denaturation on the tissue. The proposed conical angle can be instrumental in determining the uniformity of light distribution for the photothermal treatment of tubular tissue. PMID:29296495
NASA Technical Reports Server (NTRS)
Cady, E. C.
1977-01-01
A design analysis, based on experimental data, is developed to predict the effects of transient flow and pressure surges (caused either by valve or pump operation, or by boiling of liquids in warm lines) on the retention performance of screen acquisition systems. A survey of screen liquid acquisition system applications was performed to determine appropriate system environments and classification. A screen model was developed which assumed that the screen device was a uniformly distributed composite orthotropic structure, and which accounted for liquid inflow/outflow, gas ingestion quality, screen stress, and liquid spill. A series of 177 tests using 13 specimens (5 screen meshes, 4 screen device construction/backup methods, and 2 orientations) with three test fluids (isopropyl alcohol, Freon 114, and LH2) provided data which verified important features of the screen model and resulted in a design tool which could accurately predict the transient startup performance of acquisition devices.
(3749) BALAM: A VERY YOUNG MULTIPLE ASTEROID SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vokrouhlicky, David, E-mail: vokrouhl@cesnet.c
2009-11-20
Binaries and multiple systems among small bodies in the solar system have received wide attention over the past decade. This is because their observations provide a wealth of data otherwise inaccessible for single objects. We use numerical integration to prove that the multiple asteroid system (3749) Balam is very young, in contrast to its previously assumed age of 0.5-1 Gyr related to the formation of the Flora family. This work is enabled by a fortuitous discovery of a paired component to (3749) Balam. We first show that the proximity of the (3749) Balam and 2009 BR60 orbits is not a statistical fluke of an otherwise quasi-uniform distribution. Numerical integrations then strengthen the case and allow us to prove that 2009 BR60 separated from the Balam system less than a million years ago. This is the first time the age of a binary asteroid can be estimated with such accuracy.
Fast Inference with Min-Sum Matrix Product.
Felzenszwalb, Pedro F; McAuley, Julian J
2011-12-01
The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected-time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
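The operation in question is the min-sum (tropical) matrix product; a naive O(n^3) reference implementation (ours, for illustration, not the paper's fast expected-time algorithm) makes the definition concrete:

```python
def min_sum_product(A, B):
    """Tropical product: C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 3], [2, 0]]
B = [[0, 1], [5, 0]]
print(min_sum_product(A, B))  # [[0, 1], [2, 0]]
```

Replacing (×, +) with (+, min) is exactly the substitution that turns matrix multiplication into shortest-path composition, which is why fast min-sum products speed up MAP inference on the chain-structured models the paper describes.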
Interaction energy for a fullerene encapsulated in a carbon nanotorus
NASA Astrophysics Data System (ADS)
Sarapat, Pakhapoom; Baowan, Duangkamon; Hill, James M.
2018-06-01
The interaction energy of a fullerene symmetrically situated inside a carbon nanotorus is studied. For these non-bonded molecules, the main interaction originates from the van der Waals energy which is modelled by the 6-12 Lennard-Jones potential. Upon utilising the continuum approximation which assumes that there are infinitely many atoms that are uniformly distributed over the surfaces of the molecules, the total interaction energy between the two structures is obtained as a surface integral over the spherical and the toroidal surfaces. This analytical energy is employed to determine the most stable configuration of the torus encapsulating the fullerene. The results show that a torus with major radius around 20-22 Å and minor radius greater than 6.31 Å gives rise to the most stable arrangement. This study will pave the way for future developments in biomolecules design and drug delivery system.
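For concreteness, the 6-12 Lennard-Jones pair potential that underlies the surface integrals can be written down directly (reduced units; this point-point form is shown as a sketch, not the paper's integrated sphere-torus expression):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """6-12 Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)   # analytic minimum of the 6-12 form
print(lennard_jones(1.0))    # 0.0: potential crosses zero at r = sigma
print(lennard_jones(r_min))  # ~ -1.0: well depth equals -epsilon
```

In the continuum approximation described above, this pointwise V(r) is integrated over the spherical and toroidal surfaces (weighted by the atomic surface densities) to obtain the total fullerene-torus interaction energy.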
Spiral microstrip hyperthermia applicators: technical design and clinical performance.
Samulski, T V; Fessenden, P; Lee, E R; Kapp, D S; Tanabe, E; McEuen, A
1990-01-01
Spiral microstrip microwave (MW) antennas have been developed and adapted for use as clinical hyperthermia applicators. The design has been configured in a variety of forms including single fixed antenna applicators, multi-element arrays, and mechanically scanned single or paired antennas. The latter three configurations have been used to allow an expansion of the effective heating area. Specific absorption rate (SAR) distributions measured in phantom have been used to estimate the depth and volume of effective heating. The estimates are made using the bioheat equation assuming uniformly perfused tissue. In excess of 500 treatments of patients with advanced or recurrent localized superficial tumors have been performed using this applicator technology. Data from clinical treatments have been analyzed to quantify the heating performance and verify the suitability of these applicators for clinical use. Good microwave coupling efficiency together with the compact applicator size have proved to be valuable clinical assets.
NASA Technical Reports Server (NTRS)
Stagliano, T. R.; Spilker, R. L.; Witmer, E. A.
1976-01-01
A user-oriented computer program CIVM-JET 4B is described to predict the large-deflection elastic-plastic structural responses of fragment impacted single-layer: (a) partial-ring fragment containment or deflector structure or (b) complete-ring fragment containment structure. These two types of structures may be either free or supported in various ways. Supports accommodated include: (1) point supports such as pinned-fixed, ideally-clamped, or supported by a structural branch simulating mounting-bracket structure and (2) elastic foundation support distributed over selected regions of the structure. The initial geometry of each partial or complete ring may be circular or arbitrarily curved; uniform or variable thicknesses of the structure are accommodated. The structural material is assumed to be initially isotropic; strain hardening and strain rate effects are taken into account.
Stability analysis of a liquid fuel annular combustion chamber. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mcdonald, G. H.
1979-01-01
The problems of combustion instability in an annular combustion chamber are investigated. A modified Galerkin method was used to produce a set of modal amplitude equations from the general nonlinear partial differential acoustic wave equation. From these modal amplitude equations, the two variable perturbation method was used to develop a set of approximate equations of a given order of magnitude. These equations were modeled to show the effects of velocity sensitive combustion instabilities by evaluating the effects of certain parameters in the given set of equations. By evaluating these effects, parameters which cause instabilities to occur in the combustion chamber can be ascertained. It is assumed that in the annular combustion chamber, the liquid propellants are injected uniformly across the injector face, the combustion processes are distributed throughout the combustion chamber, and that no time delay occurs in the combustion processes.
NASA Astrophysics Data System (ADS)
Shiskova, I. N.; Kryukov, A. P.; Levashov, V. Yu
2017-11-01
The paper is devoted to the study of heat and mass transfer processes in the liquid and vapor phases on the basis of a uniform approach assuming a through description of liquid, interface and vapor. Multiparticle interactions in the liquid are taken into account. The problem is studied for the case when the temperature in the depth of the liquid differs from the temperature in the vapor region, so that both a mass flux and a heat flux are present. The influence on the intensity of evaporation of the correlations resulting from interactions of sets of molecules in thin near-surface liquid layers and at the interface is examined. As a result of the calculations, the equilibrium liquid-vapor saturation line is obtained, which agrees well enough with experimental data. Distributions of density, temperature, pressure, and heat and mass fluxes, both in the liquid and in the vapor, are also presented.
The Structure of the Local Hot Bubble
NASA Technical Reports Server (NTRS)
Liu, W.; Chiao, M.; Collier, M. R.; Cravens, T.; Galeazzi, M.; Koutroumpa, D.; Kuntz, K. D.; Lallement, R.; Lepri, S. T.; McCammon, Dan;
2016-01-01
Diffuse X-rays from the Local Galaxy (DXL) is a sounding rocket mission designed to quantify and characterize the contribution of Solar Wind Charge eXchange (SWCX) to the Diffuse X-ray Background and study the properties of the Local Hot Bubble (LHB). Based on the results from the DXL mission, we quantified and removed the contribution of SWCX to the diffuse X-ray background measured by the ROSAT All Sky Survey. The cleaned maps were used to investigate the physical properties of the LHB. Assuming thermal ionization equilibrium, we measured a highly uniform temperature distributed around kT = 0.097 keV +/- 0.013 keV (FWHM) +/- 0.006 keV (systematic). We also generated a thermal emission measure map and used it to characterize the three-dimensional (3D) structure of the LHB, which we found to be in good agreement with the structure of the local cavity measured from dust and gas.
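As a side calculation (not from the paper), the fitted kT converts to a conventional temperature via the Boltzmann constant:

```python
K_B_EV_PER_K = 8.617333262e-5  # Boltzmann constant in eV/K (CODATA value)

def kev_to_kelvin(kt_kev):
    """Convert a plasma temperature quoted as kT in keV to kelvin."""
    return kt_kev * 1.0e3 / K_B_EV_PER_K

# kT = 0.097 keV corresponds to roughly 1.1 million K
print(round(kev_to_kelvin(0.097) / 1e6, 2))
```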
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Technical Reports Server (NTRS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-01-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Astrophysics Data System (ADS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-05-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Effect of the tiger stripes on the deformation of Saturn's moon Enceladus
NASA Astrophysics Data System (ADS)
Souček, Ondřej; Hron, Jaroslav; Běhounková, Marie; Čadek, Ondřej
2016-07-01
Enceladus is a small icy moon of Saturn with active jets of water emanating from fractures around the south pole, informally called tiger stripes, which might be connected to a subsurface water ocean. The effect of these features on periodic tidal deformation of the moon has so far been neglected because of the difficulties associated with implementation of faults in continuum mechanics models. Here we estimate the maximum possible impact of the tiger stripes on tidal deformation and heat production within Enceladus's ice shell by representing them as narrow zones with negligible frictional and bulk resistance passing vertically through the whole ice shell. Assuming a uniform ice shell thickness of 25 km, consistent with the recent estimate of libration, we demonstrate that the faults can dramatically change the distribution of stress and strain in Enceladus's south polar region, leading to a significant increase of the heat production in this area.
Quantification of brain tissue through incorporation of partial volume effects
NASA Astrophysics Data System (ADS)
Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.
1992-06-01
This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
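The intensity model described above (per-tissue Gaussian noise plus a partial-volume term assumed uniform between tissue means) can be sketched numerically. The following Python is an illustrative reconstruction; all parameter names and values are invented, not the authors' code:

```python
import math

def gaussian(x, mu, sigma):
    """Normal density for a pure-tissue intensity class."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def intensity_pdf(x, m1, m2, sigma, w1, w2, wp):
    """Mixture density: two pure-tissue Gaussians plus a uniform
    partial-volume component spanning the interval between the two means.
    w1 + w2 + wp should sum to 1 for a normalized density."""
    uniform = 1.0 / (m2 - m1) if m1 <= x <= m2 else 0.0
    return w1 * gaussian(x, m1, sigma) + w2 * gaussian(x, m2, sigma) + wp * uniform

# Hypothetical gray/white-matter means and mixing weights (illustrative only)
p = intensity_pdf(75.0, 50.0, 100.0, 5.0, 0.4, 0.4, 0.2)
```

Fitting such a density to the image histogram and then thresholding at the optimal decision points is the quantification strategy the abstract outlines.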
One-dimensional nonlinear theory for rectangular helix traveling-wave tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Chengfang, E-mail: fchffchf@126.com; Zhao, Bo; Yang, Yudong
A 1-D nonlinear theory of a rectangular helix traveling-wave tube (TWT) interacting with a ribbon beam is presented in this paper. The RF field is modeled by a transmission line equivalent circuit, the ribbon beam is divided into a sequence of thin rectangular electron discs with the same cross section as the beam, and the charges are assumed to be uniformly distributed over these discs. Then a method of computing the space-charge field by solving Green's function in the Cartesian coordinate system is fully described. Nonlinear partial differential equations for field amplitudes and Lorentz force equations for particles are solved numerically using the fourth-order Runge-Kutta technique. The tube's gain, output power, and efficiency of the above TWT are computed. The results show that increasing the cross section of the ribbon beam will improve a rectangular helix TWT's efficiency and reduce the saturated length.
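The field-amplitude and particle equations above are integrated with the classical fourth-order Runge-Kutta scheme. As a generic illustration of that integrator (not the authors' actual TWT equations), a single RK4 step in Python might look like:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on a toy problem: dy/dt = -y, y(0) = 1, integrated to t = 1
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
# y is now very close to exp(-1)
```

In the TWT model, y would be the vector of field amplitudes and disc phase-space coordinates rather than a scalar.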
A tuneable approach to uniform light distribution for artificial daylight photodynamic therapy.
O'Mahoney, Paul; Haigh, Neil; Wood, Kenny; Brown, C Tom A; Ibbotson, Sally; Eadie, Ewan
2018-06-16
Implementation of daylight photodynamic therapy (dPDT) is somewhat limited by variable weather conditions. Light sources have been employed to provide artificial dPDT indoors, with low irradiances and longer treatment times. Uniform light distribution across the target area is key to ensuring effective treatment, particularly for large areas. A novel light source is developed with tuneable direction of light emission in order to meet this challenge. Wavelength composition of the novel light source is controlled such that the protoporphyrin-IX (PpIX) weighted spectra of both the light source and daylight match. The uniformity of the light source is characterised on a flat surface, a model head and a model leg. For context, a typical conventional PDT light source is also characterised. Additionally, the wavelength uniformity across the treatment site is characterised. The PpIX-weighted spectrum of the novel light source matches the PpIX-weighted daylight spectrum, with irradiance values within the bounds for effective dPDT. By tuning the direction of light emission, improvements are seen in the uniformity across large anatomical surfaces. Wavelength uniformity is discussed. We have developed a light source that addresses the challenges in uniform, multiwavelength light distribution for large area artificial dPDT across curved anatomical surfaces. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Tsuchiizu, Masahisa; Kawaguchi, Kouki; Yamakawa, Youichi; Kontani, Hiroshi
2018-04-01
Recently, complex rotational symmetry-breaking phenomena have been discovered experimentally in cuprate superconductors. To find the realized order parameters, we study various unconventional charge susceptibilities in an unbiased way by applying the functional-renormalization-group method to the d-p Hubbard model. Without assuming the wave vector of the order parameter, we reveal that the most dominant instability is the uniform (q = 0) charge modulation on the px and py orbitals, which possesses d symmetry. This uniform nematic order triggers another nematic p-orbital density wave along the axial (Cu-Cu) direction at Qa ≈ (π/2, 0). It is predicted that uniform nematic order is driven by the spin fluctuations in the pseudogap region, and another nematic density-wave order at q = Qa is triggered by the uniform order. The predicted multistage nematic transitions are caused by Aslamazov-Larkin-type fluctuation-exchange processes.
Comparison of crop stress and soil maps to enhance variable rate irrigation prescriptions
USDA-ARS?s Scientific Manuscript database
Soil textural variability within many irrigated fields diminishes the effectiveness of conventional irrigation management, and scheduling methods that assume uniform soil conditions may produce less than satisfactory results. Furthermore, benefits of variable-rate application of agrochemicals, seeds...
China Report, Economic Affairs
1986-04-23
uniformity" otherwise the scope control will not be effective. If we want to strengthen macroeconomic control and to stimulate the microeconomy ...management results of microeconomy . They blindly pursue the starting of projects but fail to consider or assume responsibility for the "ending" of projects
Optimized multisectioned acoustic liners
NASA Technical Reports Server (NTRS)
Baumeister, K. J.
1979-01-01
New calculations show that segmenting is most efficient at high frequencies with relatively long duct lengths where the attenuation is low for both uniform and segmented liners. Statistical considerations indicate little advantage in using optimized liners with more than two segments while the bandwidth of an optimized two-segment liner is shown to be nearly equal to that of a uniform liner. Multielement liner calculations show a large degradation in performance due to changes in assumed input modal structure. Computer programs are used to generate theoretical attenuations for a number of liner configurations for liners in a rectangular duct with no mean flow. Overall, the use of optimized multisectioned liners fails to offer sufficient advantage over a uniform liner to warrant their use except in low frequency single mode application.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-10-01
Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.
Kanagawa, Tetsuya
2015-05-01
This paper theoretically treats the weakly nonlinear propagation of diffracted sound beams in nonuniform bubbly liquids. The spatial distribution of the number density of the bubbles, initially in a quiescent state, is assumed to be a slowly varying function of the spatial coordinates; the amplitude of variation is assumed to be small compared to the mean number density. A previous derivation method of nonlinear wave equations for plane progressive waves in uniform bubbly liquids [Kanagawa, Yano, Watanabe, and Fujikawa (2010). J. Fluid Sci. Technol. 5(3), 351-369] is extended to handle quasi-plane beams in weakly nonuniform bubbly liquids. The diffraction effect is incorporated by adding a relation that scales the circular sound source diameter to the wavelength into the original set of scaling relations composed of nondimensional physical parameters. A set of basic equations for bubbly flows is composed of the averaged equations of mass and momentum, the Keller equation for bubble wall, and supplementary equations. As a result, two types of evolution equations, a nonlinear Schrödinger equation including dissipation, diffraction, and nonuniform effects for high-frequency short-wavelength case, and a Khokhlov-Zabolotskaya-Kuznetsov equation including dispersion and nonuniform effects for low-frequency long-wavelength case, are derived from the basic set.
Electrically-induced stresses and deflection in multiple plates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Jih-Perng; Tichler, P.R.
Thermohydraulic tests are being planned at the High Flux Beam Reactor of Brookhaven National Laboratory, in which direct electrical heating of metal plates will simulate decay heating in parallel plate-type fuel elements. The required currents are high if plates are made of metal with a low electrical resistance, such as aluminum. These high currents will induce either attractive or repulsive forces between adjacent current-carrying plates. Such forces, if strong enough, will cause the plates to deflect and so change the geometry of the coolant channel between the plates. Since this is undesirable, an analysis has been made to evaluate the magnitude of the deflection and related stresses. In contrast to earlier publications in which either a concentrated or a uniform load was assumed, in this paper an exact force distribution on the plate is analytically solved and then used for stress and deflection calculations, assuming each plate to be a simply supported beam. Results indicate that due to superposition of the induced forces between plates in a multiple-and-parallel plate array, the maximum deflection and bending stress occur at the midpoint of the outermost plate. The maximum shear stress, which is inversely proportional to plate thickness, occurs at both ends of the outermost plate.
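For context, the two limiting cases the authors contrast (concentrated versus uniform load on a simply supported beam) have standard closed-form midspan deflections. The sketch below uses textbook formulas with illustrative, assumed plate dimensions, not the test-program geometry:

```python
def max_deflection_uniform(w, L, E, I):
    """Midspan deflection of a simply supported beam under a uniform load w (N/m)."""
    return 5 * w * L**4 / (384 * E * I)

def max_deflection_point(P, L, E, I):
    """Midspan deflection of a simply supported beam under a central point load P (N)."""
    return P * L**3 / (48 * E * I)

# Illustrative aluminum plate strip; all dimensions and loads are assumed.
E = 69e9                 # Young's modulus of aluminum, Pa
b, h = 0.1, 0.005        # strip width and thickness, m
I = b * h**3 / 12        # second moment of area of the cross-section, m^4
L = 0.5                  # span between supports, m
w = 100.0                # distributed load, N/m

d_uniform = max_deflection_uniform(w, L, E, I)
d_point = max_deflection_point(w * L, L, E, I)  # same total load, concentrated at midspan
```

For equal total load, the concentrated case deflects 1.6 times more than the uniform case, which is why the exact force distribution matters for the coolant-channel geometry.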
NASA Astrophysics Data System (ADS)
Rodrigues, Neil S.; Kulkarni, Varun; Sojka, Paul E.
2014-11-01
While like-on-like doublet impinging jet atomization has been extensively studied in the literature, there is poor agreement between experimentally observed spray characteristics and theoretical predictions (Ryan et al. 1995, Anderson et al. 2006). Recent works (Bremond and Villermaux 2006, Choo and Kang 2007) have introduced a non-uniform jet velocity profile, which leads to a deviation from the standard assumptions for the sheet velocity and the sheet thickness parameter. These works have assumed a parabolic profile to serve as another limit to the traditional uniform jet velocity profile assumption. Incorporating a non-uniform jet velocity profile results in the sheet velocity and the sheet thickness parameter depending on the sheet azimuthal angle. In this work, the 1/7th power-law turbulent velocity profile is assumed to provide a closer match to the flow behavior of jets at high Reynolds and Weber numbers, which correspond to the impact wave regime. Predictions for the maximum wavelength, sheet breakup length, ligament diameter, and drop diameter are compared with experimental observations. The results demonstrate better agreement between experimentally measured values and predictions, compared to previous models. Supported by the U.S. Army Research Office under the Multi-University Research Initiative Grant Number W911NF-08-1-0171.
Self-consistent average-atom scheme for electronic structure of hot and dense plasmas of mixture.
Yuan, Jianmin
2002-10-01
An average-atom model is proposed to treat the electronic structures of hot and dense plasmas of mixtures. It is assumed that the electron density consists of two parts. The first is a uniform distribution with a constant value equal to the electron density at the boundaries between the atoms. The second is the total electron density minus this constant distribution. The volume of each kind of atom is proportional to the sum of the charges of the second electron part and of the nucleus within each atomic sphere. In this way, electrical neutrality is ensured within each atomic sphere. Because integrating the electron charge within each atom requires the size of that atom in advance, the calculation is carried out in the usual self-consistent way. The electron occupation numbers of the orbitals of each kind of atom are determined by the Fermi-Dirac distribution with the same chemical potential for all kinds of atoms. The wave functions and the orbital energies are calculated with the Dirac-Slater equations. As examples, the electronic structures of a mixture of Au and Cd, water (H2O), and CO2 at a few temperatures and densities are presented.
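The occupation step described above (Fermi-Dirac occupations with one chemical potential shared by all atomic species) can be illustrated with a toy level scheme. All energies, degeneracies, and the temperature below are invented for illustration and carry no physical units:

```python
import math

def fermi_dirac(eps, mu, kT):
    """Fermi-Dirac occupation of a level at energy eps."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

def find_mu(levels, degeneracies, n_electrons, kT, lo=-100.0, hi=100.0):
    """Bisect for the chemical potential that conserves the total electron count.
    In the mixture model, the same mu is shared by all atomic species."""
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        n = sum(g * fermi_dirac(e, mu, kT) for e, g in zip(levels, degeneracies))
        if n < n_electrons:
            lo = mu
        else:
            hi = mu
    return mu

# Toy orbital energies and degeneracies (illustrative only)
levels = [-10.0, -5.0, -1.0, 0.5]
degs = [2, 6, 10, 14]
mu = find_mu(levels, degs, n_electrons=10.0, kT=1.0)
```

Bisection works because the total occupation is monotonically increasing in the chemical potential.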
High density, uniformly distributed W/UO2 for use in Nuclear Thermal Propulsion
NASA Astrophysics Data System (ADS)
Tucker, Dennis S.; Barnes, Marvin W.; Hone, Lance; Cook, Steven
2017-04-01
An inexpensive, rapid method has been developed to obtain uniform distributions of UO2 particles in a tungsten matrix utilizing 0.5 wt% low density polyethylene. Powders were sintered in a Spark Plasma Sintering (SPS) furnace at 1600 °C, 1700 °C, 1750 °C, 1800 °C and 1850 °C using a modified sintering profile. This resulted in a uniform distribution of UO2 particles in a tungsten matrix with high densities, reaching 99.46% of theoretical for the sample sintered at 1850 °C. The powder process is described and the results of the study are presented.
Effects of physical and chemical heterogeneity on water-quality samples obtained from wells
Reilly, Thomas E.; Gibs, Jacob
1993-01-01
Factors that affect the mass of chemical constituents entering a well include the distributions of flow rate and chemical concentrations along and near the screened or open section of the well. Assuming a layered porous medium (with each layer being characterized by a uniform hydraulic conductivity and chemical concentration), a knowledge of the flow from each layer along the screened zone and of the chemical concentrations in each layer enables the total mass entering the well to be determined. Analyses of hypothetical systems and a site at Galloway, NJ, provide insight into the temporal variation of water-quality data observed when withdrawing water from screened wells in heterogeneous ground-water systems. The analyses of hypothetical systems quantitatively indicate the cause-and-effect relations that produce temporal variability in water samples obtained from wells. Chemical constituents that have relatively uniform concentrations with depth may not show variations in concentrations in the water discharged from a well after the well is purged (evacuation of standing water in the well casing). However, chemical constituents that do not have uniform concentrations near the screened interval of the well may show variations in concentrations in the well discharge water after purging because of the physics of ground-water flow in the vicinity of the screen. Water-quality samples were obtained through time over a 30 minute period from a site at Galloway, NJ. The water samples were analyzed for aromatic hydrocarbons, and the data for benzene, toluene, and meta+para xylene were evaluated for temporal variations. Samples were taken from seven discrete zones, and the flow-weighted concentrations of benzene, toluene, and meta+para xylene all indicate an increase in concentration over time during pumping.
These observed trends in time were reproduced numerically based on the estimated concentration distribution in the aquifer and the flow rates from each zone.The results of the hypothetical numerical experiments and the analysis of the field data both corroborate the impact of physical and chemical heterogeneity in the aquifer on water-quality samples obtained from wells. If temporal variations in concentrations of chemical constituents are observed, they may indicate variability in the ground-water system being sampled, which may give insight into the chemical distributions within the aquifer and provide guidance in the positioning of new sampling devices or wells.
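The flow-weighted concentration used in the analysis above is a discharge-weighted average over the layer inflows along the screen. A minimal sketch, with hypothetical layer values rather than the Galloway data:

```python
def flow_weighted_concentration(flows, concs):
    """Composite concentration in well discharge from layered inflow.

    flows: volumetric inflow from each layer along the screen (e.g. L/min)
    concs: solute concentration in each layer (e.g. ug/L)
    """
    total_q = sum(flows)
    return sum(q * c for q, c in zip(flows, concs)) / total_q

# Hypothetical three-layer example (values illustrative, not from the field site)
q = [2.0, 5.0, 3.0]      # layer inflows
c = [10.0, 1.0, 40.0]    # benzene-like concentrations per layer
c_mix = flow_weighted_concentration(q, c)  # (20 + 5 + 120) / 10 = 14.5
```

A high-flow, low-concentration layer can thus mask a contaminated low-flow layer, which is why the well discharge composition shifts as the near-well flow field evolves during pumping.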
Preparedness for pandemics: does variation among states affect the nation as a whole?
Potter, Margaret A; Brown, Shawn T; Lee, Bruce Y; Grefenstette, John; Keane, Christopher R; Lin, Chyongchiou J; Quinn, Sandra C; Stebbins, Samuel; Sweeney, Patricia M; Burke, Donald S
2012-01-01
Since states' public health systems differ as to pandemic preparedness, this study explored whether such heterogeneity among states could affect the nation's overall influenza rate. The Centers for Disease Control and Prevention produced a uniform set of scores on a 100-point scale from its 2008 national evaluation of state preparedness to distribute materiel from the Strategic National Stockpile (SNS). This study used these SNS scores to represent each state's relative preparedness to distribute influenza vaccine in a timely manner and assumed that "optimal" vaccine distribution would reach at least 35% of the state's population within 4 weeks. The scores were used to determine the timing of vaccine distribution for each state: each 10-point decrement of score below 90 added an additional delay increment to the distribution time. A large-scale agent-based computational model simulated an influenza pandemic in the US population. In this synthetic population each individual or agent had an assigned household, age, workplace or school destination, daily commute, and domestic intercity air travel patterns. Simulations compared influenza case rates both nationally and at the state level under 3 scenarios: no vaccine distribution (baseline), optimal vaccine distribution in all states, and vaccine distribution time modified according to state-specific SNS score. Between optimal and SNS-modified scenarios, attack rates rose not only in low-scoring states but also in high-scoring states, demonstrating an interstate spread of infections. Influenza rates were sensitive to variation of the SNS-modified scenario (delay increments of 1 day versus 5 days), but the interstate effect remained. The effectiveness of a response activity such as vaccine distribution could benefit from national standards and preparedness funding allocated in part to minimize interstate disparities.
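The score-to-delay mapping described above can be sketched directly; the exact meaning of "delay increment" below is an assumption based on the abstract's wording (one increment per full 10-point decrement below 90, with 1-day and 5-day increments as the sensitivity bounds):

```python
def delay_increments(sns_score, increment_days=1):
    """Extra vaccine-distribution delay implied by a state's SNS score.

    Each full 10-point decrement below 90 adds one delay increment;
    scores of 90 or above incur no extra delay.
    """
    decrements = max(0, (90 - sns_score) // 10)
    return decrements * increment_days

# A state scoring 65 sits two full 10-point decrements below 90:
extra = delay_increments(65, increment_days=5)
```

Scaling `increment_days` between 1 and 5 reproduces the sensitivity test the study describes, while leaving the interstate-spread effect qualitatively unchanged.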
NASA Astrophysics Data System (ADS)
Zhu, Wei; Udalski, A.; Calchi Novati, S.; Chung, S.-J.; Jung, Y. K.; Ryu, Y.-H.; Shin, I.-G.; Gould, A.; Lee, C.-U.; Albrow, M. D.; Yee, J. C.; Han, C.; Hwang, K.-H.; Cha, S.-M.; Kim, D.-J.; Kim, H.-W.; Kim, S.-L.; Kim, Y.-H.; Lee, Y.; Park, B.-G.; Pogge, R. W.; KMTNet Collaboration; Poleski, R.; Mróz, P.; Pietrukowicz, P.; Skowron, J.; Szymański, M. K.; Kozłowski, S.; Ulaczyk, K.; Pawlak, M.; OGLE Collaboration; Beichman, C.; Bryden, G.; Carey, S.; Fausnaugh, M.; Gaudi, B. S.; Henderson, C. B.; Shvartzvald, Y.; Wibking, B.; Spitzer Team
2017-11-01
We analyze an ensemble of microlensing events from the 2015 Spitzer microlensing campaign, all of which were densely monitored by ground-based high-cadence survey teams. The simultaneous observations from Spitzer and the ground yield measurements of the microlensing parallax vector π_E, from which compact constraints on the microlens properties are derived, including ≲25% uncertainties on the lens mass and distance. With the current sample, we demonstrate that the majority of microlenses are indeed in the mass range of M dwarfs. The planet sensitivities of all 41 events in the sample are calculated, from which we provide constraints on the planet distribution function. In particular, assuming a planet distribution function that is uniform in log q, where q is the planet-to-star mass ratio, we find a 95% upper limit on the fraction of stars that host typical microlensing planets of 49%, which is consistent with previous studies. Based on this planet-free sample, we develop the methodology to statistically study the Galactic distribution of planets using microlensing parallax measurements. Under the assumption that the planet distributions are the same in the bulge as in the disk, we predict that ∼1/3 of all planet detections from the microlensing campaigns with Spitzer should be in the bulge. This prediction will be tested with a much larger sample, and deviations from it can be used to constrain the abundance of planets in the bulge relative to the disk.
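A prior uniform in log q, as assumed for the planet distribution function above, is straightforward to sample; each decade of mass ratio is equally probable. The bounds below are illustrative, not the study's:

```python
import math
import random

def sample_log_uniform(q_min, q_max, n, rng=random.Random(0)):
    """Draw planet-to-star mass ratios q from a distribution uniform in log q."""
    lo, hi = math.log10(q_min), math.log10(q_max)
    return [10 ** rng.uniform(lo, hi) for _ in range(n)]

# Three decades of mass ratio; about one third of draws fall in each decade.
qs = sample_log_uniform(1e-5, 1e-2, 30000)
frac_low_decade = sum(1 for q in qs if q < 1e-4) / len(qs)
```

By contrast, a prior uniform in q itself would put nearly all draws in the highest decade, which is why the choice of prior matters for the quoted 95% upper limit.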
Mechanism of formation and spatial distribution of lead atoms in quartz tube atomizers
NASA Astrophysics Data System (ADS)
Johansson, M.; Baxter, D. C.; Ohlsson, K. E. A.; Frech, W.
1997-05-01
The cross-sectional and longitudinal spatial distributions of lead atoms in quartz tube (QT) atomizers coupled to a gas chromatograph have been investigated. A uniform analyte atom distribution over the cross-section was found in a QT having an inner diameter (i.d.) of 7 mm, whereas a 10 mm i.d. QT showed an inhomogeneous distribution. These results accentuate the importance of using QTs with i.d.s below 10 mm to fulfil the prerequisite of the Beer-Lambert law and avoid bent calibration curves. The influence of the make-up gas on the formation of lead atoms from alkyllead compounds has been studied, and carbon monoxide was found to be as efficient as hydrogen in promoting free atom formation. This suggests that hydrogen radicals are not essential for mediating the atomization of alkyllead in QT atomizers at ~1200 K. Furthermore, thermodynamic equilibrium calculations describing the investigated system were performed, supporting the experimental results. Based on the presented data, a mechanism for free lead atom formation in continuously heated QT atomizers is proposed: thermal atomization occurs under thermodynamic equilibrium conditions in a reducing gas. The longitudinal atom distribution has been further investigated using other make-up gases, N2 and He. These results show the effect of the influx of atmospheric oxygen on free lead atom formation. Calculations of the partial pressure of oxygen in the atomizer gas phase assuming thermodynamic equilibrium have been undertaken using a convective-diffusional model.
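The "bent calibration curve" effect mentioned above arises because the detector averages transmittance, not absorbance, over the beam cross-section. A minimal two-zone sketch (the atom fraction is an assumed illustration, not fitted to the QT data):

```python
import math

def absorbance_uniform(k):
    """Absorbance when atoms fill the beam uniformly: A is linear in optical depth k."""
    return k

def absorbance_inhomogeneous(k, atom_fraction=0.5):
    """Apparent absorbance when the same atoms occupy only part of the beam.

    Transmittances average over the cross-section: one zone has optical
    depth k / atom_fraction, the rest is unattenuated, so the measured
    absorbance falls below the linear Beer-Lambert prediction at high k.
    """
    f = atom_fraction
    t = f * 10 ** (-k / f) + (1 - f)
    return -math.log10(t)
```

At low optical depth both expressions agree, but as k grows the inhomogeneous curve saturates toward -log10(1 - f), producing exactly the bending that narrower tubes avoid.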
Stability of equidimensional pseudo-single-domain magnetite over billion-year timescales.
Nagy, Lesleis; Williams, Wyn; Muxworthy, Adrian R; Fabian, Karl; Almeida, Trevor P; Conbhuí, Pádraig Ó; Shcherbakov, Valera P
2017-09-26
Interpretations of paleomagnetic observations assume that naturally occurring magnetic particles can retain their primary magnetic recording over billions of years. The ability to retain a magnetic recording is inferred from laboratory measurements, where heating causes demagnetization on the order of seconds. The theoretical basis for this inference comes from previous models that assume only the existence of small, uniformly magnetized particles, whereas the carriers of paleomagnetic signals in rocks are usually larger, nonuniformly magnetized particles, for which there is no empirically complete, thermally activated model. This study has developed a thermally activated numerical micromagnetic model that can quantitatively determine the energy barriers between stable states in nonuniform magnetic particles on geological timescales. We examine in detail the thermal stability characteristics of equidimensional cuboctahedral magnetite and find that, contrary to previously published theories, such nonuniformly magnetized particles provide greater magnetic stability than their uniformly magnetized counterparts. Hence, nonuniformly magnetized grains, which are commonly the main remanence carrier in meteorites and rocks, can record and retain high-fidelity magnetic recordings over billions of years.
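For orientation, the link between an energy barrier and geological-timescale stability is commonly expressed through the Néel-Arrhenius relaxation time. The sketch below uses a generic attempt time and barrier heights in units of k_B·T, not the micromagnetically computed barriers of the study:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
SECONDS_PER_YEAR = 3.156e7

def neel_relaxation_time(barrier_joules, temperature_kelvin, tau0=1e-9):
    """Néel-Arrhenius relaxation time: tau = tau0 * exp(E_b / (k_B * T)).

    tau0 ~ 1e-9 s is a typical attempt time; the barrier is the energy
    separating two stable magnetization states.
    """
    return tau0 * math.exp(barrier_joules / (K_B * temperature_kelvin))

T = 300.0
# A barrier of ~60 k_B*T at room temperature already yields a relaxation
# time of billions of years; modest barrier increases extend it enormously.
tau_60 = neel_relaxation_time(60 * K_B * T, T)
tau_75 = neel_relaxation_time(75 * K_B * T, T)
```

The exponential dependence is why the computed barriers between nonuniform states, not just particle size, decide whether a remanence survives for billions of years.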
Price dynamics and market power in an agent-based power exchange
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Guerci, Eric; Raberto, Marco
2005-05-01
This paper presents an agent-based model of a power exchange. Supply of electric power is provided by competing generating companies, whereas demand is assumed to be inelastic with respect to price and is constant over time. The transmission network topology is assumed to be a fully connected graph and no transmission constraints are taken into account. The price formation process follows a common scheme for real power exchanges: a clearing house mechanism with uniform price, i.e., with price set equal across all matched buyer-seller pairs. A single class of generating companies is considered, characterized by linear cost function for each technology. Generating companies compete for the sale of electricity through repeated rounds of the uniform auction and determine their supply functions according to production costs. However, an individual reinforcement learning algorithm characterizes generating companies behaviors in order to attain the expected maximum possible profit in each auction round. The paper investigates how the market competitive equilibrium is affected by market microstructure and production costs.
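The uniform-price clearing mechanism described above can be sketched as a merit-order dispatch against inelastic demand, with the marginal accepted offer setting the single price paid to all matched pairs. All offer values below are illustrative:

```python
def uniform_price_clearing(offers, demand):
    """Clear an inelastic demand against generator offers at a uniform price.

    offers: list of (price, quantity) supply offers
    demand: total quantity to be served (price-insensitive)
    Returns (clearing_price, dispatched) where dispatched maps the offer
    index to its accepted quantity; every accepted offer is paid the same price.
    """
    order = sorted(range(len(offers)), key=lambda i: offers[i][0])
    remaining, dispatched, price = demand, {}, None
    for i in order:
        if remaining <= 0:
            break
        p, q = offers[i]
        take = min(q, remaining)
        dispatched[i] = take
        remaining -= take
        price = p  # marginal (last accepted) offer sets the uniform price
    return price, dispatched

# Three generating companies, inelastic demand of 120 MW (illustrative numbers):
# cheapest 60 MW at 20, next 50 MW at 30, final 10 MW at 40 -> price 40.
price, dispatch = uniform_price_clearing([(30.0, 50), (20.0, 60), (40.0, 100)], 120)
```

In the paper's agent-based setting, the offer curves themselves are not fixed but adapted round by round through reinforcement learning.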
Passive containment cooling water distribution device
Conway, Lawrence E.; Fanto, Susan V.
1994-01-01
A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using a series of radial guide elements and cascading weir boxes to collect and then distribute the cooling water into a series of distribution areas through a plurality of cascading weirs. The cooling water is then uniformly distributed over the curved surface by a plurality of weir notches in the face plate of the weir box.
NASA Astrophysics Data System (ADS)
Molnar, I. L.; O'Carroll, D. M.; Gerhard, J.; Willson, C. S.
2014-12-01
The recent success in using Synchrotron X-ray Computed Microtomography (SXCMT) for the quantification of nanoparticle concentrations within real, three-dimensional pore networks [1] has opened up new opportunities for collecting experimental data of pore-scale flow and transport processes. One opportunity is coupling SXCMT with nanoparticle/soil transport experiments to provide unique insights into how pore-scale processes influence transport at larger scales. Understanding these processes is a key step in accurately upscaling micron-scale phenomena to the continuum-scale. Upscaling phenomena from the micron-scale to the continuum-scale typically involves the assumption that the pore space is well mixed. Under this 'well mixed assumption', the distribution of nanoparticles within the pore is implicitly taken not to affect their retention by soil grains. This assumption enables the use of volume-averaged parameters in calculating transport and retention rates. However, in some scenarios, the well mixed assumption will likely be violated by processes such as deposition and diffusion. These processes can alter the distribution of the nanoparticles in the pore space and impact retention behaviour, leading to discrepancies between theoretical predictions and experimental observations. This work investigates the well mixed assumption by employing SXCMT to experimentally examine pore-scale mixing of silver nanoparticles during transport through sand-packed columns. Silver nanoparticles were flushed through three different sands to examine the impact of grain distribution and nanoparticle retention rates on mixing: uniform silica sand (low retention), well-graded silica sand (low retention) and uniform iron-oxide-coated silica sand (high retention). The SXCMT data identified diffusion-limited retention as responsible for violations of the well mixed assumption.
A mathematical description of the diffusion-limited retention process was created and compared to the experimental data at the pore and column-scale. The mathematical description accurately predicted trends observed within the SXCMT-datasets such as concentration gradients away from grain surfaces and also accurately predicted total retention of nanoparticles at the column scale. 1. ES&T 2014, 48, (2), 1114-1122.
Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates
NASA Technical Reports Server (NTRS)
Walton, Otis R.; Johnson, Scott M.
2010-01-01
The charge-spot technique for modeling the electrostatic forces acting between charged fine particles entails treating the electric charges on individual particles as small sets of discrete point charges located near their surfaces. This is in contrast to existing models, which assume a single charge per particle. The charge-spot technique more accurately describes the forces, torques, and moments that act on triboelectrically charged particles, especially image-charge forces acting near conducting surfaces. The discrete element method (DEM) simulation uses a truncation range to limit the number of near-neighbor charge spots via a shifted and truncated Coulomb interaction potential. The model can be readily adapted to account for induced dipoles in uncharged particles (and thus dielectrophoretic forces) by allowing two charge spots of opposite sign to be created in response to an external electric field. To account for virtual overlap during contacts, the model can be set to automatically scale down the effective charge in proportion to the amount of virtual overlap of the charge spots, either by mimicking the behavior of two real overlapping spherical charge clouds or with other approximate forms. The charge-spot method much more closely resembles the real non-uniform surface charge distributions that result from tribocharging than simpler approaches, which just assign a single total charge to a particle. With the charge-spot model, a single particle may have a zero net charge but still have both positive and negative charge spots, which could produce substantial forces on the particle when it is close to other charges, when it is in an external electric field, or when near a conducting surface. Since the charge-spot model can contain any number of charges per particle, it can be used with only one or two charge spots per particle to simulate charging from solar wind bombardment, or with several charge spots to simulate triboelectric charging. 
Adhesive image-charge forces acting on charged particles touching conducting surfaces can be up to 50 times stronger if the charge is located in discrete spots on the particle surface instead of being distributed uniformly over the surface of the particle, as is assumed by most other models. Besides being useful in modeling particulates in space and distant objects, this modeling technique is useful for electrophotography (used in copiers) and in simulating the effects of static charge in the pulmonary delivery of fine dry powders.
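A shifted-and-truncated Coulomb interaction of the kind mentioned above can be sketched as follows; the cutoff range and charge values are illustrative assumptions, not values from the report.

```python
# Sketch of a shifted, truncated Coulomb pair interaction between two charge
# spots, the device used to keep DEM near-neighbor lists finite. The force is
# shifted so it goes continuously to zero at the truncation range r_c.
K_E = 8.9875517873681764e9   # Coulomb constant, N m^2 / C^2

def truncated_coulomb_force(q1, q2, r, r_c):
    """Pair force magnitude: plain Coulomb force minus its value at r_c,
    and identically zero beyond the cutoff (no force discontinuity)."""
    if r >= r_c:
        return 0.0
    return K_E * q1 * q2 * (1.0 / r**2 - 1.0 / r_c**2)

q = 1e-15                      # ~6000 elementary charges per spot (assumed)
f_near = truncated_coulomb_force(q, q, 1e-6, 1e-4)   # spots 1 micron apart
f_edge = truncated_coulomb_force(q, q, 1e-4, 1e-4)   # exactly at the cutoff
print(f_near, f_edge)
```

The shift sacrifices a small amount of absolute accuracy near the cutoff in exchange for a force that vanishes smoothly there, which avoids energy artifacts when spots cross the truncation range during a timestep.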
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinite, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
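Gray correlation analysis (Deng's grey relational analysis) scores how closely each method's index sequence tracks the standard method's. A minimal sketch with hypothetical, already-normalized index values:

```python
# Sketch of Deng's grey relational analysis, the kind of comparison used to
# rank sampling methods against a standard. The index rows are hypothetical.
def grey_relational_grade(reference, comparison, rho=0.5):
    """Mean grey relational coefficient between two equal-length sequences.
    rho is the conventional distinguishing coefficient (0.5 by default)."""
    diffs = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(diffs), max(diffs)
    if d_max == 0.0:               # identical sequences: perfect grade
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in diffs]
    return sum(coeffs) / len(coeffs)

# Hypothetical normalized evaluation indexes (NOT the paper's data)
standard = [1.00, 0.990, 0.995, 0.990]
uniform  = [1.00, 0.995, 0.997, 0.992]
g = grey_relational_grade(standard, uniform)
print(round(g, 3))
```

A grade near 1 indicates the candidate method behaves like the reference; the paper's "weighted incidence degree" is a weighted variant of this mean.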
NASA Astrophysics Data System (ADS)
Wattanasakulpong, Nuttawit; Chaikittiratana, Arisara; Pornpeerakeat, Sacharuck
2018-06-01
In this paper, vibration analysis of functionally graded porous beams is carried out using the third-order shear deformation theory. The beams have uniform and non-uniform porosity distributions across their thickness and both ends are supported by rotational and translational springs. The material properties of the beams such as elastic moduli and mass density can be related to the porosity and mass coefficient utilizing the typical mechanical features of open-cell metal foams. The Chebyshev collocation method is applied to solve the governing equations derived from Hamilton's principle, which is used in order to obtain the accurate natural frequencies for the vibration problem of beams with various general and elastic boundary conditions. Based on the numerical experiments, it is revealed that the natural frequencies of the beams with asymmetric and non-uniform porosity distributions are higher than those of other beams with uniform and symmetric porosity distributions.
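The Chebyshev collocation machinery used above can be illustrated on a much simpler eigenproblem than the porous-beam equations: -u'' = λu with u(-1) = u(1) = 0, whose exact eigenvalues are (nπ/2)². The differentiation matrix follows Trefethen's standard construction; this is a sketch of the method, not of the paper's beam model.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes
    (Trefethen, Spectral Methods in MATLAB, program cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
    return D, x

N = 24
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]               # strip boundary rows/cols: u(+-1) = 0
lam = np.sort(np.real(np.linalg.eigvals(-D2)))
print(lam[:3])                          # ~ [2.467, 9.870, 22.207]
```

The low eigenvalues converge spectrally fast with N, which is why collocation is attractive for the frequency problems of beams with general elastic boundary conditions.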
Microhabitats reduce animal's exposure to climate extremes.
Scheffers, Brett R; Edwards, David P; Diesmos, Arvin; Williams, Stephen E; Evans, Theodore A
2014-02-01
Extreme weather events, such as unusually hot or dry conditions, can cause death by exceeding physiological limits and so cause population losses. Survival will depend on whether susceptible organisms can find refuges that buffer extreme conditions. Microhabitats offer different microclimates from those found within the wider ecosystem, but do these microhabitats effectively buffer extreme climate events relative to the physiological requirements of the animals that frequent them? We collected temperature data from four common microhabitats (soil, tree holes, epiphytes, and vegetation) located from the ground to the canopy in primary rainforests in the Philippines. Ambient temperatures were monitored outside each microhabitat and in the upper forest canopy, which represent our macrohabitat controls. We measured the critical thermal maxima (CTmax) of frog and lizard species, which are thermally sensitive and inhabit our microhabitats. Microhabitats reduced mean temperature by 1-2 °C and reduced the duration of extreme temperature exposure by 14-31 times. Microhabitat temperatures were below the CTmax of inhabitant frogs and lizards, whereas macrohabitats consistently contained lethal temperatures. Microhabitat temperatures increased by 0.11-0.66 °C for every 1 °C increase in macrohabitat temperature, and this nonuniformity in temperature change influenced our forecasts of vulnerability for animal communities under climate change. Assuming uniform increases of 6 °C, microhabitats decreased the vulnerability of communities by up to 32-fold, whereas under nonuniform increases of 0.66 to 3.96 °C, microhabitats decreased the vulnerability of communities by up to 108-fold. Microhabitats have extraordinary potential to buffer climate and likely reduce mortality during extreme climate events. 
These results suggest that predicted changes in distribution due to mortality and habitat shifts that are derived from macroclimatic samples and that assume uniform changes in microclimates relative to macroclimates may be overly pessimistic. Nevertheless, even nonuniform temperature increases within buffered microhabitats would still threaten frogs and lizards. © 2013 John Wiley & Sons Ltd.
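The buffering slopes quoted above (0.11-0.66 °C of microhabitat warming per 1 °C of macrohabitat warming) are ordinary least-squares regression slopes of paired temperature records. A sketch with hypothetical temperature pairs, not the study's data:

```python
# Least-squares estimate of a buffering slope: change in microhabitat
# temperature per 1 deg C of macrohabitat change. Data below are made up.
macro = [24.0, 26.0, 28.0, 30.0, 32.0, 34.0]   # canopy/ambient temps, deg C
micro = [23.1, 23.8, 24.4, 25.1, 25.7, 26.3]   # e.g. tree-hole temps, deg C

n = len(macro)
mx = sum(macro) / n
my = sum(micro) / n
slope = sum((x - mx) * (y - my) for x, y in zip(macro, micro)) / \
        sum((x - mx) ** 2 for x in macro)
print(round(slope, 2))   # -> 0.32; a slope well below 1 indicates buffering
```

Under a projected macrohabitat increase ΔT, the expected microhabitat increase is simply slope × ΔT, which is how a 6 °C uniform scenario becomes a 0.66-3.96 °C nonuniform scenario for slopes of 0.11-0.66.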
Jing, Haiquan; He, Xuhui; Zou, Yunfeng; Wang, Hanfeng
2018-03-01
Stay cables are important load-bearing structural elements of cable-stayed bridges. Suppressing large vibrations of stay cables under external excitation is a worldwide concern for bridge engineers and researchers. Over the past decade, the use of crossties has become one of the most practical and effective methods. Extensive research has led to a better understanding of the mechanics of cable networks, and the effects of different parameters, such as length ratio, mass-tension ratio, and segment ratio, on the effectiveness of the crosstie have been investigated. In this study, uniformly distributed elastic crossties replace the traditional single or several crossties, aiming to delay "mode localization." A numerical method is developed by replacing the uniformly distributed, discrete elastic crosstie model with an equivalent, continuously distributed, elastic crosstie model in order to calculate the modal frequencies and mode shapes of the cable-crosstie system. The effectiveness of the proposed method is verified by comparing the elicited results with those obtained using the previous method. The uniformly distributed elastic crossties are shown to significantly delay "mode localization."
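The equivalent continuously distributed crosstie idea can be illustrated on the simplest case, a taut cable restrained by a uniform elastic layer. The closed-form frequency expression below is a textbook result for that simplified case, not the paper's full cable-network formulation, and the parameter values are assumptions:

```python
import math

# Taut cable of tension T, mass per length m, length L, restrained by an
# elastic layer of stiffness per unit length k. Modal frequencies are
#   f_n = (1/2pi) * sqrt((n*pi/L)**2 * T/m + k/m),
# so distributed elastic support raises every mode (illustrative values only).
def cable_freq(n, T, m, L, k):
    return math.sqrt((n * math.pi / L) ** 2 * T / m + k / m) / (2 * math.pi)

T, m, L = 5.0e6, 100.0, 200.0     # N, kg/m, m: typical stay-cable magnitudes
for k in (0.0, 1.0e3):            # without / with distributed support, N/m^2
    print([round(cable_freq(n, T, m, L, k), 3) for n in (1, 2, 3)])
```

Because the k/m term is mode-independent, it lifts low modes proportionally more than high modes, one intuition for why distributed crossties delay mode localization relative to a single discrete crosstie.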
Darling, Aaron E.
2009-01-01
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
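For intuition about the object being sampled, the toy sketch below enumerates optimal sorting paths exactly for a tiny signed permutation: BFS gives inversion distances, dynamic programming counts minimum-length paths, and a weighted walk draws one uniformly. This brute force is hopeless beyond a few markers, which is exactly why the paper resorts to MCMC; nothing here is the MC4Inversion algorithm itself.

```python
import random
from collections import deque
from functools import lru_cache

def neighbors(p):
    """All signed permutations reachable by one inversion (reverse + negate)."""
    out = []
    for i in range(len(p)):
        for j in range(i + 1, len(p) + 1):
            seg = tuple(-x for x in reversed(p[i:j]))
            out.append(p[:i] + seg + p[j:])
    return out

def bfs_distances(identity):
    dist = {identity: 0}
    q = deque([identity])
    while q:
        p = q.popleft()
        for nb in neighbors(p):
            if nb not in dist:
                dist[nb] = dist[p] + 1
                q.append(nb)
    return dist

IDENT = (1, 2, 3)
DIST = bfs_distances(IDENT)   # an inversion is its own inverse, so d(p,id)=d(id,p)

@lru_cache(maxsize=None)
def n_paths(p):
    """Number of minimum-length inversion series from p to the identity."""
    if p == IDENT:
        return 1
    return sum(n_paths(nb) for nb in neighbors(p) if DIST[nb] == DIST[p] - 1)

def sample_path(p):
    """Draw one optimal sorting path uniformly at random."""
    path = [p]
    while p != IDENT:
        nbs = [nb for nb in neighbors(p) if DIST[nb] == DIST[p] - 1]
        p = random.choices(nbs, weights=[n_paths(nb) for nb in nbs])[0]
        path.append(p)
    return path

start = (-3, 1, 2)
path = sample_path(start)
print(DIST[start], n_paths(start), len(path) - 1)
```

Weighting each step by the downstream path count is what makes the draw uniform over whole paths rather than uniform over single moves.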
Govoni, V; Della Coletta, E; Cesnik, E; Casetta, I; Tugnoli, V; Granieri, E
2015-04-01
An ecological study in the resident population of the Health District (HD) of Ferrara, Italy, has been carried out to establish the distribution in space and time of the amyotrophic lateral sclerosis (ALS) incident cases according to the disease onset type and gender in the period 1964-2009. The hypothesis of a uniform distribution was assumed. The incident cases of spinal onset ALS and bulbar onset ALS were evenly distributed in space and time in both men and women. The spinal onset ALS incident cases distribution according to gender was significantly different from the expected in the extra-urban population (20 observed cases in men 95% Poisson confidence interval 12.22-30.89, expected cases in men 12.19; six observed cases in women 95% Poisson confidence interval 2.20-13.06, expected cases in women 13.81), whereas no difference was found in the urban population. The spinal onset ALS incidence was higher in men than in women in the extra-urban population (difference between the rates = 1.53, 95% CI associated with the difference 0.52-2.54), whereas no difference between sexes was found in the urban population. The uneven distribution according to gender of the spinal onset ALS incident cases only in the extra-urban population suggests the involvement of a gender related environmental risk factor associated with the extra-urban environment. Despite some limits of the spatial analysis in the study of rare diseases, the results appear consistent with the literature data. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
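The Poisson confidence intervals quoted above are exact (Garwood-style) intervals. They can be reproduced with nothing but the standard library by bisecting the Poisson CDF; for 20 observed cases this recovers the 12.22-30.89 interval in the abstract.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by stable iterative summation."""
    term = total = math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def _bisect(f, lo, hi, tol=1e-10):
    """Root of a decreasing-through-zero function f on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def poisson_ci(x, alpha=0.05):
    """Exact two-sided (1 - alpha) confidence interval for a Poisson count x."""
    hi = 10.0 * x + 10
    # lower limit: P(X >= x | lam) = alpha/2  <=>  cdf(x-1, lam) = 1 - alpha/2
    lower = 0.0 if x == 0 else _bisect(
        lambda l: poisson_cdf(x - 1, l) - (1 - alpha / 2), 0.0, hi)
    # upper limit: P(X <= x | lam) = alpha/2
    upper = _bisect(lambda l: poisson_cdf(x, l) - alpha / 2, 0.0, hi)
    return lower, upper

lo, up = poisson_ci(20)
print(round(lo, 2), round(up, 2))   # -> 12.22 30.89
```

The same routine on the six observed female cases gives 2.20-13.06, matching the abstract's second interval.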
Factoring out nondecision time in choice reaction time data: Theory and implications.
Verdonck, Stijn; Tuerlinckx, Francis
2016-03-01
Choice reaction time (RT) experiments are an invaluable tool in psychology and neuroscience. A common assumption is that the total choice response time is the sum of a decision and a nondecision part (time spent on perceptual and motor processes). While the decision part is typically modeled very carefully (commonly with diffusion models), a simple and ad hoc distribution (mostly uniform) is assumed for the nondecision component. Nevertheless, it has been shown that the misspecification of the nondecision time can severely distort the decision model parameter estimates. In this article, we propose an alternative approach to the estimation of choice RT models that elegantly bypasses the specification of the nondecision time distribution by means of an unconventional convolution of data and decision model distributions (hence called the D*M approach). Once the decision model parameters have been estimated, it is possible to compute a nonparametric estimate of the nondecision time distribution. The technique is tested on simulated data, and is shown to systematically remove traditional estimation bias related to misspecified nondecision time, even for a relatively small number of observations. The shape of the actual underlying nondecision time distribution can also be recovered. Next, the D*M approach is applied to a selection of existing diffusion model application articles. For all of these studies, substantial quantitative differences with the original analyses are found. For one study, these differences radically alter its final conclusions, underlining the importance of our approach. Additionally, we find that strongly right skewed nondecision time distributions are not at all uncommon. (c) 2016 APA, all rights reserved.
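The additive assumption above implies that the observed RT density is the convolution of the decision-time and nondecision-time densities, the identity the D*M machinery is built on. A toy discretized check with made-up densities (not the D*M estimator itself):

```python
import numpy as np

# Discretize two made-up densities and verify that the convolved density
# integrates to one and that mean RT = mean decision + mean nondecision.
dt = 0.001
t = np.arange(0, 3, dt)
decision = t * np.exp(-t / 0.3)                 # right-skewed decision density
decision /= decision.sum() * dt                 # normalize on the grid
nondecision = np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)   # ~N(0.3 s, 0.05 s)
nondecision /= nondecision.sum() * dt

rt = np.convolve(decision, nondecision) * dt    # observed RT density
t_rt = np.arange(len(rt)) * dt

mean = lambda d, x: (d * x).sum() * dt
print(mean(decision, t) + mean(nondecision, t), mean(rt, t_rt))
```

Because means add exactly under convolution, misattributing part of the nondecision mean to the decision process shifts the diffusion parameters, which is the estimation bias the abstract describes.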
Stacked waveguide reactors with gradient embedded scatterers for high-capacity water cleaning
Ahsan, Syed Saad; Gumus, Abdurrahman; Erickson, David
2015-11-04
We present a compact water-cleaning reactor with stacked layers of waveguides containing gradient patterns of optical scatterers that enable uniform light distribution and augmented water-cleaning rates. Previous photocatalytic reactors using immersion, external, or distributive lamps suffer from poor light distribution that impedes scalability. Here, we use an external UV source to direct photons into stacked waveguide reactors, where we scatter the photons uniformly over the length of the waveguide to thin films of TiO2 catalysts. We also show a 4.5-fold improvement in activity over uniform scatterer designs, demonstrate degradation of 67% of the organic dye, and characterize the degradation rate constant.
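Photocatalytic dye degradation of this kind is commonly fit to first-order kinetics, C(t) = C0·exp(-kt), so a reported removal fraction converts directly to a rate constant once the exposure time is known. The 60-minute treatment time below is a hypothetical placeholder, since the abstract does not give one:

```python
import math

# First-order kinetics sketch: k = -ln(C/C0) / t. The 67% removal is from
# the abstract; the 60-minute exposure time is an assumed placeholder.
C_over_C0 = 1.0 - 0.67           # fraction of dye remaining
t = 60.0                          # treatment time, min (assumed)
k = -math.log(C_over_C0) / t      # first-order rate constant, 1/min
half_life = math.log(2) / k       # time to degrade half the dye, min
print(round(k, 4), round(half_life, 1))
```

Comparing such rate constants across scatterer designs is how the 4.5-fold activity improvement would typically be quantified.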
Ahsan, Syed Saad; Pereyra, Brandon; Jung, Erica E; Erickson, David
2014-10-20
Most existing photobioreactors do a poor job of distributing light uniformly due to shading effects. One method by which this could be improved is through the use of internal wave-guiding structures incorporating engineered light scattering schemes. By varying the density of these scatterers, one can control the spatial distribution of light inside the reactor enabling better uniformity of illumination. Here, we compare a number of light scattering schemes and evaluate their ability to enhance biomass accumulation. We demonstrate a design for a gradient distribution of surface scatterers with uniform lateral scattering intensity that is superior for algal biomass accumulation, resulting in a 40% increase in the growth rate.
Delfani, M. R.; Latifi Shahandashti, M.
2017-09-01
In this paper, within the complete form of Mindlin's second strain gradient theory, the elastic field of an isolated spherical inclusion embedded in an infinitely extended homogeneous isotropic medium due to a non-uniform distribution of eigenfields is determined. These eigenfields, in addition to eigenstrain, comprise eigen double and eigen triple strains. After the derivation of a closed-form expression for Green's function associated with the problem, two different cases of non-uniform distribution of the eigenfields are considered as follows: (i) radial distribution, i.e. the distributions of the eigenfields are functions of only the radial distance of points from the centre of inclusion, and (ii) polynomial distribution, i.e. the distributions of the eigenfields are polynomial functions in the Cartesian coordinates of points. While the obtained solution for the elastic field of the latter case takes the form of an infinite series, the solution to the former case is represented in a closed form. Moreover, Eshelby's tensors associated with the two mentioned cases are obtained.
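For orientation, the classical first-gradient-free limit that this second-gradient analysis generalizes can be stated compactly (a standard textbook result, with ν the Poisson ratio; the paper's solution should reduce to it when the eigen double and triple strains and the gradient length-scale parameters vanish and the eigenstrain is uniform):

```latex
% Classical Eshelby result: a uniform eigenstrain \varepsilon^* in a spherical
% inclusion in an infinite isotropic medium produces a uniform interior strain
\varepsilon_{ij} = S_{ijkl}\,\varepsilon^{*}_{kl},
\qquad
S_{ijkl} = \frac{5\nu - 1}{15(1-\nu)}\,\delta_{ij}\delta_{kl}
         + \frac{4 - 5\nu}{15(1-\nu)}\,\bigl(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\bigr).
```

The non-uniform eigenfields treated in the paper destroy this interior uniformity, which is why the radial and polynomial cases require the Green's-function machinery developed there.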
Yang, Ce; Wang, Yingjun; Lao, Dazhong; Tong, Ding; Wei, Longyu; Liu, Yixiong
2016-08-01
The inlet recirculation characteristics of a double-suction centrifugal compressor with unsymmetrical inlet structures were studied numerically, focusing on three issues: the amounts and differences of the inlet recirculation in different working conditions, the circumferential non-uniform distributions of the inlet recirculation, and the recirculation velocity distributions in the upstream slot of the rear impeller. The results show that there are some differences between the recirculation of the front impeller and that of the rear impeller over the whole working range. At design speed, the recirculation flow rate of the rear impeller is larger than that of the front impeller in the large-flow range, but in the small-flow range the recirculation flow rate of the rear impeller is smaller than that of the front impeller. In different working conditions, the recirculation velocity distributions of the front and rear impellers are non-uniform along the circumferential direction and their non-uniform extents are quite different; the circumferential non-uniform extent of the recirculation velocity varies as the working conditions change. The circumferential non-uniform extent of the recirculation velocity of the front impeller and its distribution are determined by the static pressure distribution of the front impeller, but that of the rear impeller is decided by the coupling effects of the inlet flow distortion of the rear impeller, the circumferentially unsymmetrical distribution of the upstream slot and the asymmetric structure of the volute. In the design-flow and small-flow conditions, the recirculation velocities at different circumferential positions of the mean line of the upstream slot cross-section of the rear impeller are quite different, and the recirculation velocity distribution forms on the two sides of the mean line are different. 
The recirculation velocity distributions in the cross-section of the upstream slot depend on the static pressure distributions in the intake duct.
Apparatus and process to enhance the uniform formation of hollow glass microspheres
Schumacher, Ray F
2013-10-01
A process and apparatus are provided for enhancing the formation of a uniform population of hollow glass microspheres. A burner head directs incoming glass particles away from the cooler perimeter of the flame cone of the gas burner and distributes the glass particles uniformly throughout the more evenly heated portions of the flame zone. As a result, as the glass particles soften and are expanded by a released nucleating gas to form hollow glass microspheres, the resulting microspheres have a more uniform size and property distribution because they experience a more homogeneous heat-treatment process.
Effect of Thermal Gradient on Vibration of Non-uniform Visco-elastic Rectangular Plate
Khanna, Anupam; Kaur, Narinder
2016-04-01
Here, a theoretical model is presented to analyze the effect of bilinear temperature variation on the vibration of a non-homogeneous visco-elastic rectangular plate with non-uniform thickness. Non-uniformity in the thickness of the plate is assumed to be linear in one direction. Since the plate's material is considered non-homogeneous, the authors characterize the non-homogeneity in the Poisson ratio and density of the plate's material exponentially in the x-direction. The plate is assumed to be clamped at the ends. Deflection for the first two modes of vibration is calculated using the Rayleigh-Ritz technique and tabulated for various values of the plate's parameters, i.e. taper constant, aspect ratio, non-homogeneity constants and thermal gradient. Comparison of the present findings with the existing literature is also provided in tabular and graphical form.
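The Rayleigh-Ritz idea used above can be shown on the simplest clamped-clamped case, a uniform Euler-Bernoulli beam. With the admissible one-term trial shape φ(x) = 1 - cos(2πx/L) (zero deflection and slope at both ends), the Rayleigh quotient gives a frequency coefficient of 22.79 against the exact 22.373, about 2% high, as a one-term upper bound should be. This is a sketch of the technique only, not the paper's plate formulation; the trial function is an assumption.

```python
import math

# One-term Rayleigh-Ritz estimate for a clamped-clamped uniform beam:
#   omega^2 = (EI * int phi''^2 dx) / (rho*A * int phi^2 dx),
# evaluated by midpoint-rule quadrature with EI = rho*A = 1.
L = 1.0
n = 20000
dx = L / n
xs = [(i + 0.5) * dx for i in range(n)]
phi = [1 - math.cos(2 * math.pi * x / L) for x in xs]
phi2 = [(2 * math.pi / L) ** 2 * math.cos(2 * math.pi * x / L) for x in xs]
num = sum(p * p for p in phi2) * dx     # int phi''^2
den = sum(p * p for p in phi) * dx      # int phi^2
coef = math.sqrt(num / den) * L ** 2    # omega * L^2 / sqrt(EI/(rho*A))
print(round(coef, 2))                    # -> 22.79 (exact: 22.373)
```

Adding a second trial term would lower the estimate toward the exact value, which is the same convergence mechanism the paper exploits with two modes for the plate.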
Allen, C. S.; Korkan, K. D.
1991-01-01
A methodology for predicting the performance and acoustics of counterrotating propeller configurations was modified to take into account the effects of a non-uniform free stream velocity distribution entering the disk plane. The method utilizes the analytical techniques of Lock and Theodorsen as described by Davidson to determine the influence of the non-uniform free stream velocity distribution in the prediction of the steady aerodynamic loads. The unsteady load contribution is determined according to the procedure of Leseture, with rigid helical tip vortices simulating the previous rotations of each propeller. The steady and unsteady loads are combined to obtain the total blade loading required for acoustic prediction employing the Ffowcs Williams-Hawkings equation as simplified by Succi under the assumption of compact sources. The numerical method is used to redesign the previous commuter-class counterrotating propeller configuration of Denner. The specifications, performance, and acoustics of the new design are compared with the results of Denner, thereby determining the influence of the non-uniform free stream velocity distribution on these metrics.
A novel polyimide based micro heater with high temperature uniformity
Yu, Shifeng; Wang, Shuyu; Lu, Ming; ...
2017-02-06
MEMS based micro heaters are a key component in micro bio-calorimetry, nondispersive infrared gas sensors, semiconductor gas sensors and microfluidic actuators. A micro heater with a uniform temperature distribution in the heating area and short response time is desirable in ultrasensitive temperature-dependent measurements. In this study, we propose a novel micro heater design to reach a uniform temperature in a large heating area by optimizing the heating power density distribution in the heating area. A polyimide membrane is utilized as the substrate to reduce the thermal mass and heat loss, which allows for fast thermal response as well as a simplified fabrication process. A gold and titanium heating element is fabricated on the flexible polyimide substrate using the standard MEMS technique. The temperature distribution in the heating area for a certain power input is measured by an IR camera, and is consistent with FEA simulation results. Finally, this design can achieve fast response and uniform temperature distribution, which is quite suitable for programmable heating such as impulse and step driving.
Gravitational Wakes Sizes from Multiple Cassini Radio Occultations of Saturn's Rings
Marouf, E. A.; Wong, K. K.; French, R. G.; Rappaport, N. J.; McGhee, C. A.; Anabtawi, A.
2016-12-01
Voyager and Cassini radio occultation extinction and forward scattering observations of Saturn's C-Ring and Cassini Division imply power-law particle size distributions extending from a few millimeters to several meters, with power-law index in the 2.8 to 3.2 range depending on the specific ring feature. We extend size determination to the elongated and canted particle clusters (gravitational wakes) known to permeate Saturn's A- and B-Rings. We use multiple Cassini radio occultation observations over a range of ring opening angle B and wake viewing angle α to constrain the mean wake width W and thickness/height H, and the average ring area coverage fraction. The rings are modeled as a randomly blocked diffraction screen in the plane normal to the incidence direction. Collective particle shadows define the blocked area. The screen's transmittance is binary: blocked or unblocked. Wakes are modeled as a thin layer of elliptical cylinders populated by random but uniformly distributed spherical particles. The cylinders can be immersed in a "classical" layer of spatially uniformly distributed particles. Numerical simulations of model diffraction patterns reveal two distinct components: cylindrical and spherical. The first dominates at small scattering angles and originates from specific locations within the footprint of the spacecraft antenna on the rings. The second dominates at large scattering angles and originates from the full footprint. We interpret Cassini extinction and scattering observations in the light of the simulation results. We compute and remove the contribution of the spherical component to observed scattered signal spectra assuming a known particle size distribution. A large residual spectral component is interpreted as the contribution of cylindrical (wake) diffraction. Its angular width determines a cylindrical shadow width that depends on the wake parameters (W,H) and the viewing geometry (α,B). 
Its strength constrains the mean fractional area covered (optical depth), and hence the mean wake spacing. Self-consistent (W,H) are estimated using a least-squares fit to results from multiple occultations. Example results for observed scattering by several inner A-Ring features suggest particle clusters (wakes) that are a few tens of meters wide and several meters thick.
Percolation of fracture networks and stereology
Thovert, Jean-Francois; Mourzenko, Valeri; Adler, Pierre
2017-04-01
The overall properties of fractured porous media depend on the percolative character of the fracture network in a crucial way. The most important examples are permeability and transport. In a recent systematic study, a very wide range of regular, irregular and random fracture shapes is considered, in monodisperse or polydisperse networks containing fractures with different shapes and/or sizes. A simple and new model involving a dimensionless density and a new shape factor is proposed for the percolation threshold, which accounts very efficiently for the influence of the fracture shape. It applies with very good accuracy to monodisperse or moderately polydisperse networks, and provides a good first estimation in other situations. A polydispersity index is shown to control the need for a correction, and the corrective term is modelled for the investigated size distributions. Moreover, and this is crucial for practical applications, the relevant quantities which are present in the expression of the percolation threshold can all be determined from trace maps. An exact and complete set of relations can be derived when the fractures are assumed to be Identical, Isotropically Oriented and Uniformly Distributed (I2OUD). Therefore, the dimensionless density of such networks can be derived directly from the trace maps and its percolating character can be a priori predicted. These relations involve the first five moments of the trace lengths. It is clear that the higher order moments are sensitive to truncation due to the boundaries of the sampling domain. However, it can be shown that the truncation effect can be fully taken into account and corrected, for any fracture shape, size and orientation distributions, if the fractures are spatially uniformly distributed. Systematic applications of these results are made to real fracture networks that we previously analyzed by other means and to numerically simulated networks. 
It is important to know if the stereological results and their applications can be extended to networks which are not I2OUD. In other words, for a given trace map, an equivalent I2OUD network is defined whose percolating character and permeability are readily deduced. The conditions under which these predicted properties are not too far from the real properties are under investigation.
Modeling bursts and heavy tails in human dynamics
Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László
2006-03-01
The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed while a few experience very long waiting times. In contrast, priority-blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting times of the individual tasks follow a heavy-tailed distribution P(τw) ~ τw^(-α) with α = 3/2. The second model imposes limitations on the queue length, resulting in a heavy-tailed waiting time distribution characterized by α = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α = 1, the surface mail based communication belongs to the α = 3/2 universality class. Finally, we discuss possible extensions of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
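The fixed-queue-length model described above is easy to sketch: keep L tasks with random priorities, execute the highest-priority one with probability p (a random one otherwise), and replace it with a fresh task. The parameter values below are illustrative, not taken from the paper.

```python
import random
import statistics

# Sketch of the fixed-length priority-queue model: with p near 1, most tasks
# are executed almost immediately while a few low-priority tasks wait very
# long, producing a strongly right-skewed waiting-time distribution.
random.seed(42)
L, p, steps = 2, 0.99, 20000
queue = [(random.random(), 0) for _ in range(L)]   # (priority, arrival step)
waits = []

for step in range(1, steps + 1):
    if random.random() < p:
        idx = max(range(L), key=lambda i: queue[i][0])  # highest priority
    else:
        idx = random.randrange(L)                       # priority-blind pick
    waits.append(step - queue[idx][1])                  # record waiting time
    queue[idx] = (random.random(), step)                # fresh task arrives

# Right skew: the mean waiting time far exceeds the median.
print(statistics.mean(waits), statistics.median(waits))
```

Setting p = 0 (purely random execution) removes the skew and recovers the uniform interevent statistics of priority-blind execution.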
NASA Astrophysics Data System (ADS)
Shchukin, V. G.; Popov, V. N.
2017-10-01
One of the promising ways to improve the operational properties of machine parts during induction treatment of their surfaces is modification of the melt by specially prepared nanoscale particles of refractory compounds (carbides, nitrides, carbonitrides, etc.). This approach allows us to increase the number of crystallization centers and to refine the structural components of the solidified metal. The resulting high dispersity and homogeneity of the crystalline grains favorably affect the quality of the treated surfaces. A 3D numerical simulation of the thermophysical processes during modification of the surface layer of metal in a moving substrate was carried out. It is assumed that the surface of the substrate is covered with a layer of specially prepared nanoscale particles of a refractory compound, which, upon penetration into the melt, are uniformly distributed in it. The possibility of applying a high-power, high-frequency electromagnetic field for heating and melting a metal (iron) for the purpose of its subsequent modification is investigated. The distribution of electromagnetic energy in the metal is described by empirical formulas. Melting of the metal is considered in the Stefan approximation, and upon solidification it is assumed that all nanoparticles serve as centers of volume-sequential crystallization. Calculations were carried out with the following parameters: specific power p0 = 35 and 40 kW/cm² at frequencies f = 440 and 1200 kHz, substrate velocity V = 0.5-2.5 cm/s, nanoparticle size 50 nm, and concentration Np = 2.0 × 10⁹ cm⁻³. Based on the results obtained in a quasi-stationary formulation, the temperature field, the dimensions of the melting and crystallization zones, the change in the solid fraction in the two-phase zone, and the area of the treated substrate surface were estimated as functions of the substrate speed and the induction heating characteristics.
Recent Upgrades to the NASA Ames Mars General Circulation Model: Applications to Mars' Water Cycle
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, M. A.; Haberle, R. M.; Montmessin, F.; Wilson, R. J.; Schaeffer, J.
2008-09-01
We report on recent improvements to the NASA Ames Mars general circulation model (GCM), a robust 3D climate-modeling tool that is state-of-the-art in terms of its physics parameterizations and subgrid-scale processes, and which can be applied to investigate physical and dynamical processes of the present (and past) Mars climate system. The most recent version (gcm2.1, v.24) of the Ames Mars GCM utilizes a more generalized radiation code (based on a two-stream approximation with correlated k's); an updated transport scheme (van Leer formulation); a cloud microphysics scheme that assumes a log-normal particle size distribution whose first two moments are treated as atmospheric tracers, and which includes the nucleation, growth and sedimentation of ice crystals. Atmospheric aerosols (e.g., dust and water-ice) can either be radiatively active or inactive. We apply this version of the Ames GCM to investigate key aspects of the present water cycle on Mars. Atmospheric dust is partially interactive in our simulations; namely, the radiation code "sees" a prescribed distribution that follows the MGS thermal emission spectrometer (TES) year-one measurements with a self-consistent vertical depth scale that varies with season. The cloud microphysics code interacts with a transported dust tracer column whose surface source is adjusted to maintain the TES distribution. The model is run from an initially dry state with a better representation of the north residual cap (NRC) which accounts for both surface-ice and bare-soil components. A seasonally repeatable water cycle is obtained within five Mars years. Our sub-grid scale representation of the NRC provides for a more realistic flux of moisture to the atmosphere and a much drier water cycle consistent with recent spacecraft observations (e.g., Mars Express PFS, corrected MGS/TES) compared to models that assume a spatially uniform and homogeneous north residual polar cap.
Modeling bursts and heavy tails in human dynamics.
Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László
2006-03-01
The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follow non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experiencing very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting time of the individual tasks follow a heavy tailed distribution P(tau(w)) approximately tau(w)(-alpha) with alpha=3/2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by alpha=1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display alpha=1, the surface mail based communication belongs to the alpha=3/2 universality class. Finally, we discuss possible extension of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
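The length-limited queuing model described above (the alpha=1 class) can be illustrated with a short simulation. This is a sketch only: the queue length, the selection probability p, and the uniform priority distribution are illustrative choices, not parameters quoted from the paper.

```python
import random

def simulate_priority_queue(n_steps=100000, queue_len=2, p=0.99999, seed=1):
    """Sketch of a fixed-length priority queue: at each step the
    highest-priority task is executed with probability p (otherwise a
    random task is picked), and a new task with random priority takes
    its place. Returns the waiting time (in steps) of each executed task."""
    random.seed(seed)
    queue = [(random.random(), 0) for _ in range(queue_len)]  # (priority, arrival step)
    waits = []
    for step in range(1, n_steps + 1):
        if random.random() < p:
            idx = max(range(queue_len), key=lambda i: queue[i][0])
        else:
            idx = random.randrange(queue_len)
        waits.append(step - queue[idx][1])
        queue[idx] = (random.random(), step)
    return waits

waits = simulate_priority_queue()
# Most tasks are executed almost immediately, while a few low-priority
# tasks wait orders of magnitude longer -- the heavy tail of the model.
```

Running this shows the bursty signature directly: the bulk of waiting times equal one step, while the maximum waiting time is larger by several orders of magnitude.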
Distribution and regularity of injection from a multicylinder fuel-injection pump
NASA Technical Reports Server (NTRS)
Rothrock, A M; Marsh, E T
1936-01-01
This report presents the results of performance tests conducted on a six-cylinder commercial fuel-injection pump that was adjusted to give uniform fuel distribution among the cylinders at a throttle setting of 0.00038 pound per injection and a pump speed of 750 revolutions per minute. The throttle setting and pump speed were then varied through the operating range to determine the uniformity of distribution and regularity of injection.
NASA Astrophysics Data System (ADS)
Wang, P.; Wang, K.; Hawkes, A.; Horton, B. P.; Engelhart, S. E.; Nelson, A. R.; Witter, R. C.
2011-12-01
Abrupt coastal subsidence induced by the great AD 1700 Cascadia earthquake has been estimated from paleoseismic evidence of buried soils and overlying mud and associated tsunami deposits. These records have been modeled using a rather uniform rupture model, a mirror image of the uniform interseismic fault locking based on modern GPS observations. However, as seen in other megathrust earthquakes such as those at Sumatra, Chile, and Alaska, the rupture must have had multiple patches of concentrated slip. Variable moment release is also seen in the 2011 Tohoku-Oki earthquake in Japan, although there is only one patch. The use of a uniform rupture scenario for Cascadia is due mainly to the poor resolving power of the previous paleoseismic data. In this work, we invoke recently obtained, more precise data from detailed microfossil studies to better constrain the slip distribution. Our 3-D elastic dislocation model allows the fault slip to vary along strike. Along any profile in the dip direction, we assume a bell-shaped slip distribution with the peak value scaling with local rupture width, consistent with rupture mechanics. We found that the coseismic slip is large in central Cascadia, and areas of high moment release are separated by areas of low moment release. The amount of slip in northern and southern Cascadia is poorly constrained. Although data uncertainties are large, the variable coastal subsidence can be explained with multiple slip patches. For example, there is an area near Alsea Bay, Oregon (about 44.5°N) that, in accordance with the minimum coseismic subsidence estimated by the microfossil data, had very little slip in the 1700 event. This area approximately coincides with a segment boundary previously defined on the basis of gravity anomalies. There is also reported evidence for the presence of a subducting seamount in this area, and the seamount might be responsible for impeding rupture during large earthquakes. 
The nature of this rupture barrier and whether it is a persistent feature are important topics of future research. Our results indicate that there is not always a one-to-one correlation between areas of more complete interseismic locking and larger coseismic slip.
NASA Astrophysics Data System (ADS)
Weathers, T. S.; Ginn, T. R.; Spycher, N.; Barkouki, T. H.; Fujita, Y.; Smith, R. W.
2009-12-01
Subsurface contamination is often mitigated with an injection/extraction well system. An understanding of heterogeneities within this radial flowfield is critical for modeling, prediction, and remediation of the subsurface. We address this using a Lagrangian approach: instead of depicting spatial extents of solutes in the subsurface we focus on their arrival distribution at the control well(s). A well-to-well treatment system that incorporates in situ microbially-mediated ureolysis to induce calcite precipitation for the immobilization of strontium-90 has been explored at the Vadose Zone Research Park (VZRP) near Idaho Falls, Idaho. PHREEQC2 is utilized to model the kinetically-controlled ureolysis and consequent calcite precipitation. PHREEQC2 provides a one-dimensional advective-dispersive transport option that can be and has been used in streamtube ensemble models. Traditionally, each streamtube maintains uniform velocity; however in radial flow in homogeneous media, the velocity within any given streamtube is variable in space, being highest at the input and output wells and approaching a minimum at the midpoint between the wells. This idealized velocity variability is of significance if kinetic reactions are present with multiple components, if kinetic reaction rates vary in space, if the reactions involve multiple phases (e.g. heterogeneous reactions), and/or if they impact physical characteristics (porosity/permeability), as does ureolytically driven calcite precipitation. Streamtube velocity patterns for any particular configuration of injection and withdrawal wells are available as explicit calculations from potential theory, and also from particle tracking programs. To approximate the actual spatial distribution of velocity along streamtubes, we assume idealized non-uniform velocity associated with homogeneous media. 
This is implemented in PHREEQC2 via a non-uniform spatial discretization within each streamtube that honors both the streamtube’s travel time and the idealized “fast-slow-fast” nonuniform velocity along the streamline. Breakthrough curves produced by each simulation are weighted by the path-respective flux fractions (obtained by deconvolution of tracer tests conducted at the VZRP) to obtain the flux-average of flow contributions to the observation well. Breakthrough data from urea injection experiments performed at the VZRP are compared to the model results from the PHREEQC2 variable velocity ensemble.
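The idealized "fast-slow-fast" velocity pattern along the straight streamline joining the injection and extraction wells can be sketched from 2D potential theory. The well spacing and flow rate below are illustrative values, not VZRP site parameters.

```python
import math

def centerline_speed(x, a=10.0, q=1.0):
    """Speed along the straight streamline between an injection well at
    x = -a and an extraction well at x = +a (2D potential flow, unit
    thickness): the 1/r contributions of the two wells add on the
    centerline, so speed peaks at the wells and bottoms out midway."""
    return (q / (2 * math.pi)) * (1.0 / (x + a) + 1.0 / (a - x))

xs = [-9.0, -5.0, 0.0, 5.0, 9.0]
speeds = [centerline_speed(x) for x in xs]
# Fast near both wells, slowest at the midpoint x = 0, and symmetric.
```

A non-uniform spatial discretization honoring this profile (finer cells where the velocity is high) is the kind of grid the PHREEQC2 streamtubes in the abstract are built on.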
Flow coating apparatus and method of coating
Hanumanthu, Ramasubrahmaniam; Neyman, Patrick; MacDonald, Niles; Brophy, Brenor; Kopczynski, Kevin; Nair, Wood
2014-03-11
Disclosed is a flow coating apparatus, comprising a slot that can dispense a coating material in an approximately uniform manner along a distribution blade that increases uniformity by means of surface tension and transfers the uniform flow of coating material onto an inclined substrate such as glass, solar panels, windows, or part of an electronic display. Also disclosed is a method of flow coating a substrate using the apparatus such that the substrate is positioned correctly relative to the distribution blade, and a pre-wetting step is completed in which both the blade and substrate are completely wetted with a pre-wet solution prior to dispensing of the coating material onto the distribution blade from the slot and hence onto the substrate. Thereafter the substrate is removed from the distribution blade and allowed to dry, thereby forming a coating.
Stress intensity factors in a hollow cylinder containing a radial crack
NASA Technical Reports Server (NTRS)
Delale, F.
1980-01-01
An exact formulation of the plane elasticity problem for a hollow cylinder or a disk containing a radial crack is given. The crack may be an external edge crack, an internal edge crack, or an embedded crack. It is assumed that on the crack surfaces the shear traction is zero and the normal traction is an arbitrary function of r. For various crack geometries and radius ratios, the numerical results are obtained for a uniform crack surface pressure, for a uniform pressure acting on the inside wall of the cylinder, and for a rotating disk.
Stress intensity factors in a hollow cylinder containing a radial crack
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1982-01-01
In this paper, an exact formulation of the plane elasticity problem for a hollow cylinder or a disk containing a radial crack is given. The crack may be an external edge crack, an internal edge crack, or an embedded crack. It is assumed that on the crack surfaces the shear traction is zero, and the normal traction is an arbitrary function of radius. For various crack geometries and radius ratios, the numerical results are obtained for a uniform crack surface pressure, for a uniform pressure acting on the inside wall of the cylinder, and for a rotating disk.
NASA Astrophysics Data System (ADS)
Il'ichev, A. T.; Savin, A. S.
2017-12-01
We consider a planar evolution problem for perturbations of the ice cover by a dipole starting its uniform rectilinear horizontal motion in a column of an initially stationary fluid. Using asymptotic Fourier analysis, we show that at supercritical velocities, waves of two types form on the water-ice interface. We describe the process of establishing these waves during the dipole motion. We assume that the fluid is ideal and incompressible and its motion is potential. The ice cover is modeled by the Kirchhoff-Love plate.
Two-dimensional grid-free compressive beamforming.
Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli
2017-08-01
Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for the measurement with plane microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite programming is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on alternating direction method of multipliers is presented to solve the positive semidefinite programming. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.
Exact solutions to a spatially extended model of kinase-receptor interaction.
Szopa, Piotr; Lipniacki, Tomasz; Kazmierczak, Bogdan
2011-10-01
B and mast cells are activated by the aggregation of immune receptors. Motivated by this phenomenon, we consider a simple spatially extended model of the mutual interaction of kinases and membrane receptors. It is assumed that kinase activates membrane receptors and, in turn, the kinase molecules bound to the active receptors are activated by transphosphorylation. Such a type of interaction implies positive feedback and may lead to bistability. In this study we apply the Steklov eigenproblem theory to analyze the linearized model and find exact solutions in the case of non-uniformly distributed membrane receptors. This approach allows us to determine the critical value of the receptor dephosphorylation rate at which cell activation (by arbitrarily small perturbation of the inactive state) is possible. We found that cell sensitivity grows with decreasing kinase diffusion and increasing anisotropy of the receptor distribution. Moreover, these two effects are cooperating. We showed that cell activity can be abruptly triggered by the formation of a receptor aggregate. Since the considered activation mechanism is not based on receptor crosslinking by polyvalent antigens, the proposed model can also explain B cell activation due to receptor aggregation following binding of monovalent antigens presented on the antigen presenting cell.
Vertical distribution of structural components in corn stover
Johnson, Jane M. F.; Karlen, Douglas L.; Gresham, Garold L.; ...
2014-11-17
In the United States, corn (Zea mays L.) stover has been targeted for second generation fuel production and other bio-products. Our objective was to characterize sugar and structural composition as a function of vertical distribution of corn stover (leaves and stalk) that was sampled at physiological maturity and about three weeks later from multiple USA locations. A small subset of samples was assessed for thermochemical composition. Concentrations of lignin, glucan, and xylan were about 10% greater at grain harvest than at physiological maturity, but harvestable biomass was about 25% less due to stalk breakage. Gross heating density above the ear averaged 16.3 ± 0.40 MJ kg⁻¹, but with an alkalinity measure of 0.83 g MJ⁻¹, slagging is likely to occur during gasification. Assuming a stover harvest height of 10 cm, the estimated ethanol yield would be >2500 L ha⁻¹, but it would be only 1000 L ha⁻¹ if stover harvest was restricted to the material from above the primary ear. Vertical composition of corn stover is relatively uniform; thus, the decision on cutting height may be driven by agronomic, economic and environmental considerations.
The directed self-assembly for the surface patterning by electron beam II
NASA Astrophysics Data System (ADS)
Nakagawa, Sachiko T.
2015-03-01
When a low-energy electron beam (EB) or a low-energy ion beam (IB) irradiates a crystal of zincblende (ZnS) type, such as crystalline Si (c-Si), a very similar {311} planar defect is often observed. Here, we used a molecular dynamics simulation of c-Si that included uniformly distributed Frenkel pairs, assuming a wide beam and a sparse distribution of defects caused by each EB. We observed the formation of linear defects, which agglomerate to form planar defects labeled with the Miller index {311}, as in the case of IB irradiation. These were identified by a crystallographic analysis called the pixel mapping (PM) method. The PM suggested that self-interstitial atoms (SIAs) may be stabilized on a specific frame of a lattice made of invisible metastable sites in the ZnS-type crystal. This agglomeration appears as {311} planar defects. It was possible at temperatures much higher than room temperature, for example, at 1000 K. This implies that, whatever disturbance brings many SIAs into a ZnS-type crystal, elevated lattice vibration promotes self-organization of the SIAs into {311} planar defects according to the frame of the metastable lattice, as guided by a chart presented by crystallography.
Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael
2016-01-01
Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI estimated by nonparametric quantile regression with TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform, at quantile levels of 0.8 and 0.9, for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily in all simulated cases even for moderate (say, ≥40) sample sizes. The performance of TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of the nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the differences is not normal, especially when it has a heavy tail.
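In its simplest nonparametric form, TDI is just an empirical quantile of the absolute paired differences. The sketch below illustrates this definition; the nearest-rank quantile rule and the simulated data are illustrative choices, not the quantile-regression estimator of the paper.

```python
import random

def tdi_nonparametric(x, y, q=0.8):
    """Total deviation index at quantile q: the boundary below which a
    proportion q of the absolute paired differences |x_i - y_i| fall
    (nearest-rank empirical quantile)."""
    diffs = sorted(abs(a - b) for a, b in zip(x, y))
    k = max(0, min(len(diffs) - 1, int(q * len(diffs)) - 1))
    return diffs[k]

# Simulated paired readings from two "methods": differences ~ N(0, 0.5),
# so the theoretical TDI at q = 0.8 is 0.5 * Phi^-1(0.9) ≈ 0.64.
random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(10000)]
y = [xi + random.gauss(0.0, 0.5) for xi in x]
tdi = tdi_nonparametric(x, y, 0.8)
```

Under normality the parametric TDI of Lin (2000) targets this same quantity; the nonparametric estimate remains sensible when the difference distribution is skewed or heavy tailed.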
Spinning solutions in general relativity with infinite central density
NASA Astrophysics Data System (ADS)
Flammer, P. D.
2018-05-01
This paper presents general relativistic numerical simulations of uniformly rotating polytropes. Equations are developed using MSQI coordinates, but taking a logarithm of the radial coordinate. The result is relatively simple elliptical differential equations. Due to the logarithmic scale, we can resolve solutions with near-singular mass distributions near their center, while the solution domain extends many orders of magnitude larger than the radius of the distribution (to connect with flat space-time). Rotating solutions are found with very high central energy densities for a range of adiabatic exponents. Analytically, assuming the pressure is proportional to the energy density (which is true for polytropes in the limit of large energy density), we determine the small radius behavior of the metric potentials and energy density. This small radius behavior agrees well with the small radius behavior of large central density numerical results, lending confidence to our numerical approach. We compare results with rotating solutions available in the literature, which show good agreement. We study the stability of spherical solutions: instability sets in at the first maximum in mass versus central energy density; this is also consistent with results in the literature, and further lends confidence to the numerical approach.
Re-evaluation of model-based light-scattering spectroscopy for tissue spectroscopy
Lau, Condon; Šćepanović, Obrad; Mirkovic, Jelena; McGee, Sasha; Yu, Chung-Chieh; Fulghum, Stephen; Wallace, Michael; Tunnell, James; Bechtel, Kate; Feld, Michael
2009-01-01
Model-based light scattering spectroscopy (LSS) has seemed a promising technique for in-vivo diagnosis of dysplasia in multiple organs. In those studies, the residual spectrum, the difference between the observed and modeled diffuse reflectance spectra, was attributed to single elastic light scattering from epithelial nuclei, and diagnostic information due to nuclear changes was extracted from it. We show that this picture is incorrect. The actual single scattering signal arising from epithelial nuclei is much smaller than the previously computed residual spectrum, and does not have the wavelength dependence characteristic of Mie scattering. Rather, the residual spectrum largely arises from assuming a uniform hemoglobin distribution. In fact, hemoglobin is packaged in blood vessels, which alters the reflectance. When we include vessel packaging, which accounts for an inhomogeneous hemoglobin distribution, in the diffuse reflectance model, the reflectance is modeled more accurately, greatly reducing the amplitude of the residual spectrum. These findings are verified via numerical estimates based on light propagation and Mie theory, tissue phantom experiments, and analysis of published data measured from Barrett’s esophagus. In future studies, vessel packaging should be included in the model of diffuse reflectance and use of model-based LSS should be discontinued. PMID:19405760
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balderson, M.J.; Kirkby, C.; Department of Medical Physics, Tom Baker Cancer Centre, Calgary, Alberta
In vitro evidence has suggested that radiation induced bystander effects may enhance non-local cell killing, which may influence radiotherapy treatment planning paradigms. This work applies a bystander effect model, derived from published in vitro data, to calculate equivalent uniform dose (EUD) and tumour control probability (TCP) and compare them with predictions from standard linear quadratic (LQ) models that assume a response due only to local absorbed dose. Comparisons between the models were made under increasing dose heterogeneity scenarios. Dose throughout the CTV was modeled with normal distributions, where the degree of heterogeneity was dictated by changing the standard deviation (SD). The broad assumptions applied in the bystander effect model are intended to place an upper limit on the extent of the results in a clinical context. The bystander model suggests a moderate degree of dose heterogeneity yields as good or better an outcome than a uniform dose in terms of EUD and TCP. Intermediate risk prostate prescriptions of 78 Gy over 39 fractions had maximum EUD and TCP values at an SD of around 5 Gy. The plots only dropped below the uniform dose values for SD ∼ 10 Gy, almost 13% of the prescribed dose. The bystander model demonstrates the potential to deviate from the common local LQ model predictions as dose heterogeneity through a prostate CTV varies. The results suggest the potential for allowing some degree of dose heterogeneity within a CTV, although further investigations of the assumptions of the bystander model are warranted.
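The purely local baseline that the bystander model is compared against can be sketched numerically: under an exponential survival model, EUD falls as dose heterogeneity grows. This is a sketch of the local-response EUD only (not the bystander model); the alpha value and the neglect of the beta term are illustrative assumptions.

```python
import math
import random

def eud_lq(doses, alpha=0.3):
    """Equivalent uniform dose: the uniform dose giving the same mean
    clonogen survival as the heterogeneous distribution, using a simple
    exp(-alpha * D) survival model (beta term neglected for brevity)."""
    mean_sf = sum(math.exp(-alpha * d) for d in doses) / len(doses)
    return -math.log(mean_sf) / alpha

random.seed(2)
# 78 Gy prescription with increasing heterogeneity (SD in Gy). For a
# normal dose spread, analytically EUD ≈ D - alpha * SD**2 / 2, so EUD
# decreases monotonically with SD under a purely local response.
euds = {}
for sd in (0.0, 5.0, 10.0):
    doses = [random.gauss(78.0, sd) for _ in range(100000)]
    euds[sd] = eud_lq(doses)
```

The contrast with the abstract is the point: a local LQ response always penalizes heterogeneity, whereas the bystander model permits EUD and TCP maxima at an SD of around 5 Gy.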
Improving the Representation of Snow Crystal Properties with a Single-Moment Microphysics Scheme
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Demek, Scott R.
2010-01-01
Single-moment microphysics schemes are utilized in an increasing number of applications and are widely available within numerical modeling packages, often executed in near real-time to aid in the issuance of weather forecasts and advisories. In order to simulate cloud microphysical and precipitation processes, a number of assumptions are made within these schemes. Snow crystals are often assumed to be spherical and of uniform density, and their size distribution intercept may be fixed to simplify calculation of the remaining parameters. Recently, the Canadian CloudSat/CALIPSO Validation Project (C3VP) provided aircraft observations of snow crystal size distributions and environmental state variables, sampling widespread snowfall associated with a passing extratropical cyclone on 22 January 2007. Aircraft instrumentation was supplemented by comparable surface estimations and sampling by two radars: the C-band, dual-polarimetric radar in King City, Ontario and the NASA CloudSat 94 GHz Cloud Profiling Radar. As radar systems respond to both hydrometeor mass and size distribution, they provide value when assessing the accuracy of cloud characteristics as simulated by a forecast model. However, simulation of the 94 GHz radar signal requires special attention, as radar backscatter is sensitive to the assumed crystal shape. Observations obtained during the 22 January 2007 event are used to validate assumptions of density and size distribution within the NASA Goddard six-class single-moment microphysics scheme. Two high resolution forecasts are performed on a 9-3-1 km grid, with C3VP-based alternative parameterizations incorporated and examined for improvement. In order to apply the CloudSat 94 GHz radar to model validation, the single scattering characteristics of various crystal types are used and demonstrate that the assumption of Mie spheres is insufficient for representing CloudSat reflectivity derived from winter precipitation. 
Furthermore, snow density and size distribution characteristics are allowed to vary with height, based upon direct aircraft estimates obtained from C3VP data. These combinations improve the representation of modeled clouds versus their radar-observed counterparts, based on profiles and vertical distributions of reflectivity. These meteorological events are commonplace within the mid-latitude cold season and present a challenge to operational forecasters. This study focuses on one event, likely representative of others during the winter season, and aims to improve the representation of snow for use in future operational forecasts.
Experimental and numerical modeling research of rubber material during microwave heating process
NASA Astrophysics Data System (ADS)
Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling
2018-05-01
This paper investigates the heating behavior of block rubber by experimental and numerical methods. The COMSOL Multiphysics 5.0 software was used for the numerical simulations. The effects of microwave frequency, power and sample size on the temperature distribution are examined. The effect of frequency on the temperature distribution is pronounced: the maximum and minimum temperatures of the block rubber first increase and then decrease with increasing frequency. The microwave heating efficiency peaks at a frequency of 2450 MHz, although the temperature distribution is more uniform at other frequencies. The influence of microwave power on the temperature distribution is also marked: the smaller the power, the more uniform the temperature distribution in the block rubber, while the effect of power on heating efficiency is minor. Sample size likewise has a clear effect: the smaller the sample, the more uniform the temperature distribution, but the lower the microwave heating efficiency. The results can serve as a reference for research on heating rubber materials by microwave technology.
Severity of Organized Item Theft in Computerized Adaptive Testing: A Simulation Study
ERIC Educational Resources Information Center
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua
2008-01-01
Criteria had been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated potential damage caused by organized item theft in computerized adaptive…
The Political Polarization of Women: Where Political Scientists Went Wrong.
ERIC Educational Resources Information Center
MacManus, Susan A.
Early research into women's political participation assumed erroneously that gender was more important than race, ethnicity, and class, that uniform commitment on women's issues would occur, and that only female officeholders could represent women. Based on attitudinal, participatory, and electoral data collected in Houston, Texas in 1977-78, this…
Predicting Precession Rates from Secular Dynamics for Extra-solar Multi-planet Systems
NASA Astrophysics Data System (ADS)
Van Laerhoven, Christa L.
2015-11-01
Considering the secular dynamics of multi-planet systems provides substantial insight into the interactions between planets in those systems. Secular interactions are those that don't involve knowing where a planet is along its orbit, and they dominate when planets are not involved in mean motion resonances. These interactions exchange angular momentum among the planets, evolving their eccentricities and inclinations. To second order in the planets' eccentricities and inclinations, the eccentricity and inclination perturbations are decoupled. Given the right variable choice, the relevant differential equations are linear and thus the eccentricity and inclination behaviors can be described as a sum of eigenmodes. Since the underlying structure of the secular eigenmodes can be calculated using only the planets' masses and semi-major axes, one can elucidate the eccentricity and inclination behavior of planets in exoplanet systems even without knowing the planets' current eccentricities and inclinations. I have calculated both the eccentricity and inclination secular eigenmodes for the population of known multi-planet systems whose planets have well determined masses and periods. Using this catalog, and assuming a Gaussian distribution for the eigenmode amplitudes and a uniform distribution for the eigenmode phases, I have predicted what range of precession rates the planets may have. Generally, planets for which more than one eigenmode contributes significantly to the eccentricity ('groupies') can have a wide range of possible precession rates, while planets that are 'loners' have a narrow range of possible precession rates. One might have assumed that in any given system, the planets with shorter periods would have faster precession rates. However, I show that in systems where the planets suffer strong secular interactions this is not necessarily the case.
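As a concrete illustration of the eigenmode machinery this abstract describes, the following sketch builds the Laplace-Lagrange eccentricity matrix from masses and semi-major axes alone and reads the secular eigenfrequencies (apsidal precession rates) off its eigenvalues. This is a minimal leading-order implementation, not the author's code; units assume G = 1, masses in solar masses, distances in AU (so the natural time unit is yr/2π), and the Jupiter-Saturn numbers serve only as a sanity check.

```python
import numpy as np
from scipy.integrate import quad

def laplace_coeff(s, j, alpha):
    """Laplace coefficient b_s^(j)(alpha), evaluated by direct quadrature."""
    f = lambda psi: np.cos(j * psi) / (1 - 2 * alpha * np.cos(psi) + alpha**2) ** s
    return quad(f, 0, 2 * np.pi)[0] / np.pi

def secular_ecc_matrix(masses, a, mstar=1.0):
    """Laplace-Lagrange eccentricity matrix A, to leading order in planet masses.
    Units: G = 1, masses in M_sun, a in AU, so rates come out in rad per (yr/2pi)."""
    n = np.sqrt(mstar / a**3)                      # Keplerian mean motions
    N = len(a)
    A = np.zeros((N, N))
    for j in range(N):
        for k in range(N):
            if j == k:
                continue
            alpha = min(a[j], a[k]) / max(a[j], a[k])
            abar = alpha if a[k] > a[j] else 1.0   # extra alpha for an external perturber
            pref = 0.25 * n[j] * (masses[k] / mstar) * alpha * abar
            A[j, j] += pref * laplace_coeff(1.5, 1, alpha)
            A[j, k] = -pref * laplace_coeff(1.5, 2, alpha)
    return A

# Sanity check on Jupiter + Saturn (masses in M_sun, semi-major axes in AU)
A = secular_ecc_matrix(np.array([9.54e-4, 2.86e-4]), np.array([5.20, 9.55]))
g = np.linalg.eigvals(A)                            # the two secular eigenfrequencies
rates_arcsec = np.sort(np.real(g)) * 2 * np.pi * 206265   # convert to arcsec/yr
```

The eigenvectors of A give each mode's footprint across the planets; drawing mode amplitudes and phases (Gaussian and uniform respectively, as in the abstract) would then yield the predicted distribution of precession rates.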
Post-processing of metal matrix composites by friction stir processing
NASA Astrophysics Data System (ADS)
Sharma, Vipin; Singla, Yogesh; Gupta, Yashpal; Raghuwanshi, Jitendra
2018-05-01
In metal matrix composites, a non-uniform distribution of reinforcement particles adversely affects the mechanical properties. It is therefore of great interest to explore post-processing techniques that can eliminate particle distribution heterogeneity. Friction stir processing is a relatively new technique used for post-processing of metal matrix composites to improve homogeneity in particle distribution. In friction stir processing, the synergistic effect of stirring, extrusion and forging results in grain refinement, reduction of reinforcement particle size, more uniform particle distribution, reduced microstructural heterogeneity and elimination of defects.
Non-uniform muscle fat replacement along the proximodistal axis in Duchenne muscular dystrophy.
Hooijmans, M T; Niks, E H; Burakiewicz, J; Anastasopoulos, C; van den Berg, S I; van Zwet, E; Webb, A G; Verschuuren, J J G M; Kan, H E
2017-05-01
The progressive replacement of muscle tissue by fat in Duchenne muscular dystrophy (DMD) has been studied using quantitative MRI between, but not within, individual muscles. We studied fat replacement along the proximodistal muscle axis using the Dixon technique on a 3T MR scanner in 22 DMD patients and 12 healthy controls. Mean fat fractions per muscle per slice for seven lower and upper leg muscles were compared between and within groups assuming a parabolic distribution. Average fat fraction for a small central slice stack and a large coverage slice stack were compared to the value when the stack was shifted one slice (15 mm) up or down. Higher fat fractions were observed in distal and proximal muscle segments compared to the muscle belly in all muscles of the DMD subjects (p <0.001). A shift of 15 mm resulted in a difference in mean fat fraction which was on average 1-2% ranging up to 12% (p <0.01). The muscle end regions are exposed to higher mechanical strain, which points towards mechanical disruption of the sarcolemma as one of the key factors in the pathophysiology. Overall, this non-uniformity in fat replacement needs to be taken into account to prevent sample bias when applying quantitative MRI as biomarker in clinical trials for DMD. Copyright © 2017 Elsevier B.V. All rights reserved.
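The within-muscle comparison described above rests on fitting a parabola to per-slice fat fractions along the proximodistal axis. A minimal sketch with synthetic data (the values below are hypothetical, merely mimicking the reported pattern of fattier muscle ends, and are not patient data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-slice fat fractions along the proximodistal axis: higher at both
# muscle ends than in the belly, plus small measurement noise.
z = np.linspace(-1.0, 1.0, 15)                          # normalized slice position
ff = 0.30 + 0.12 * z**2 + rng.normal(0.0, 0.01, z.size)

# Parabolic within-muscle model ff(z) = a*z^2 + b*z + c, as assumed for the comparison
a, b, c = np.polyfit(z, ff, 2)
belly_z = -b / (2 * a)   # vertex of the parabola = least fat-replaced slice (the belly)
```

A positive quadratic coefficient `a` indicates end regions fattier than the belly, and the vertex locates the belly; shifting the slice stack relative to this vertex is what produces the sampling bias the abstract warns about.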
NASA Astrophysics Data System (ADS)
Xie, T.-Q.; Zeidel, M. L.; Pan, Yingtian
2002-12-01
Most transitional cell tumorigenesis involves three stages of subcellular morphological changes: hyperplasia, dysplasia and neoplasia. Previous studies demonstrated that owing to its high spatial resolution and intermediate penetration depth, current OCT technology, including endoscopic OCT, can delineate the urothelium, submucosa and the upper muscular layers of the bladder wall. In this paper, we discuss the sensitivity and limitations of OCT in diagnosing and staging bladder cancer. Based on histomorphometric evaluations of nuclear morphology, we modeled the resultant backscattering changes and the characteristic changes in OCT image contrast. In the theoretical modeling, we assumed that nuclei were the primary sources of scattering and were uniformly distributed in the uroepithelium, and compared the predictions with the corresponding prior OCT measurements. According to our theoretical modeling, a normal bladder shows a thin, uniform and low-scattering urothelium, as does an inflammatory lesion, apart from thickening of the submucosa. Compared with a normal bladder, a hyperplastic lesion exhibits a thickened, low-scattering urothelium, whereas a neoplastic lesion shows a thickened urothelium with increased backscattering. These results support our previous animal study showing that OCT has the potential to differentiate inflammation, hyperplasia, and neoplasia by quantifying the changes in urothelial thickening and backscattering. The results also suggest that OCT might not have the sensitivity to differentiate the subtle morphological changes between hyperplasia and dysplasia based on minor backscattering differences.
Transitions from trees to cycles in adaptive flow networks
NASA Astrophysics Data System (ADS)
Martens, Erik A.; Klemm, Konstantin
2017-11-01
Transport networks are crucial to the functioning of natural and technological systems. Nature features transport networks that are adaptive over a vast range of parameters, thus providing an impressive level of robustness in supply. Theoretical and experimental studies have found that real-world transport networks exhibit both tree-like motifs and cycles. When the network is subject to load fluctuations, the presence of cyclic motifs may help to reduce flow fluctuations and, thus, render supply in the network more robust. While previous studies considered network topology via optimization principles, here, we take a dynamical systems approach and study a simple model of a flow network with dynamically adapting weights (conductances). We assume a spatially non-uniform distribution of rapidly fluctuating loads in the sinks and investigate what network configurations are dynamically stable. The network converges to a spatially non-uniform stable configuration composed of both cyclic and tree-like structures. Cyclic structures emerge locally in a transcritical bifurcation as the amplitude of the load fluctuations is increased. The resulting adaptive dynamics thus partitions the network into two distinct regions with cyclic and tree-like structures. The location of the boundary between these two regions is determined by the amplitude of the fluctuations. These findings may explain why natural transport networks display cyclic structures in the micro-vascular regions near terminal nodes, but tree-like features in the regions with larger veins.
Soliton matter in the two-dimensional linear sigma model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodd, L.R.; Lohe, M.A.; Rossi, M.
1987-10-01
We consider a one-dimensional model of nuclear matter where the quark clusters are described by solutions of the sigma model on a linear lattice in the self-consistent mean field approximation. Exact expressions are given for the baglike solutions confined to a finite interval, corresponding in the infinite-interval limit to the free solitons previously found by Campbell and Liao. Periodic, self-consistent solutions which satisfy Bloch's theorem are constructed. Their energies and associated quark sigma-field distributions are calculated numerically as functions of the baryon spacing, and compared with those of the uniform quark plasma. The predicted configuration of the ground state depends critically on the assumed manner of filling the lowest band of quark single-particle levels, and on the density. In the absence of additional repulsive forces in the model, we find that the high-density massless quark plasma is energetically favored and that there is a smooth transition from the baglike state to a uniform plasma with nonvanishing sigma field at comparatively large lattice constants 2d ≈ 10 m_q^(-1) (m_q is the quark mass). If dilute filling of the entire band is employed, the clustered state is stable and a first-order phase transition can occur for a range of much smaller lattice spacings 2d ≈ 4 m_q^(-1).
A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2010-08-01
A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers depending solely on d. These endpoint distributions have a simple geometrical interpretation. For a two-step planar walk with q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4.
Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
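The geometric interpretation quoted above for the two-step planar walk with q = 1 can be checked by simulation: the endpoint's radial law should match the projection onto the disc of a point uniform on the 3D unit sphere. A short Monte Carlo sketch (sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Two-step planar Pearson-Dirichlet walk with q = 1: total length 1, step lengths
# (U, 1 - U) with U ~ Uniform(0, 1) (a Dirichlet(1, 1) split), directions uniform.
u = rng.uniform(0.0, 1.0, N)
th1, th2 = rng.uniform(0.0, 2 * np.pi, (2, N))
x = u * np.cos(th1) + (1 - u) * np.cos(th2)
y = u * np.sin(th1) + (1 - u) * np.sin(th2)
r_walk = np.hypot(x, y)

# Projection onto the equatorial disc of a point uniform on the 3D unit sphere:
# by Archimedes' theorem z ~ Uniform(-1, 1), so the projected radius is sqrt(1 - z^2).
z = rng.uniform(-1.0, 1.0, N)
r_proj = np.sqrt(1.0 - z**2)

# The two radial laws should agree; compare a few quantiles.
q = np.linspace(0.1, 0.9, 9)
max_gap = np.max(np.abs(np.quantile(r_walk, q) - np.quantile(r_proj, q)))
```

With 2×10^5 samples the quantile gap stays at the level of the Monte Carlo error, consistent with the claimed identity of the two distributions.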
2015-06-01
…of uniform- versus nonuniform-pattern reconstruction, of the transform function used, and of the minimum randomly distributed measurements needed to… the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples, even though the pattern error manifests… Fig. 3: the nonuniform compressive-sensing reconstruction of the radiation…
V/V(max) test applied to SMM gamma-ray bursts
NASA Technical Reports Server (NTRS)
Matz, S. M.; Higdon, J. C.; Share, G. H.; Messina, D. C.; Iadicicco, A.
1992-01-01
We have applied the V/V(max) test to candidate gamma-ray bursts detected by the Gamma-Ray Spectrometer (GRS) aboard the SMM satellite to examine quantitatively the uniformity of the burst source population. For a sample of 132 candidate bursts identified in the GRS data by an automated search using a single uniform trigger criterion we find average V/V(max) = 0.40 +/- 0.025. This value is significantly different from 0.5, the average for a uniform distribution in space of the parent population of burst sources; however, the shape of the observed distribution of V/V(max) is unusual and our result conflicts with previous measurements. For these reasons we can currently draw no firm conclusion about the distribution of burst sources.
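The null hypothesis behind the V/V(max) test can be made concrete: for standard candles distributed uniformly in Euclidean space, V/V(max) = (F/F_lim)^(-3/2) is uniform on (0, 1) with mean 0.5, which is the value the measured 0.40 ± 0.025 is compared against. A minimal Monte Carlo sketch of that null (all numbers illustrative, not the GRS data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard candles uniform in a sphere of radius 100 (arbitrary units); flux ~ 1/d^2.
d = 100.0 * rng.uniform(0.0, 1.0, 500_000) ** (1.0 / 3.0)   # radii uniform in volume
flux = 1.0 / d**2
f_lim = 1.0 / 50.0**2                  # detection threshold (reaches out to d = 50)

detected = flux >= f_lim
# For each detected burst, V/Vmax = (d / d_max)^3 = (flux / f_lim)^(-3/2).
v_vmax = (flux[detected] / f_lim) ** (-1.5)

mean = v_vmax.mean()
sigma = 1.0 / np.sqrt(12.0 * detected.sum())   # std. error of the mean for U(0, 1)
```

A measured average several sigma below 0.5, as reported, would indicate a deficit of faint (distant) sources relative to a homogeneous population.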
Design and development of novel bandages for compression therapy.
Rajendran, Subbiyan; Anand, Subhash
2003-03-01
During the past few years there have been increasing concerns about the performance of bandages, especially their pressure distribution properties, for the treatment of venous leg ulcers. This is because compression therapy is a complex system requiring two- or multi-layer bandages, and the performance properties of each layer differ from those of the others. The widely accepted sustained graduated compression depends mainly on the uniform pressure distribution of the different bandage layers, in which textile fibres and bandage structures play a major role. This article examines how fibres, fibre blends and structures influence the absorption and pressure distribution properties of bandages. It is hoped that the research findings will help medical professionals, especially nurses, gain an insight into the development of bandages. A total of 12 padding bandages were produced using various fibres and fibre blends. A new technique that facilitates good resilience and cushioning properties, higher and more uniform pressure distribution, and enhanced water absorption and retention was adopted during production. The properties of the developed padding bandages, which include uniform pressure distribution around the leg, were found to be superior to those of existing commercial bandages, and the bandages possess a number of additional properties required to meet the criteria stipulated for an ideal padding bandage. The results indicate that none of the most widely used commercial padding bandages provides the required uniform pressure distribution around the limb.
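The dependence of sub-bandage pressure on limb geometry noted above is usually rationalized through Laplace's law: pressure scales as layers × tension / (radius × width), so a single applied tension cannot give uniform pressure along a limb whose radius varies. A minimal sketch (tensions, radii and width below are illustrative values, not from the article):

```python
def interface_pressure(tension, radius, layers=1, width=0.10):
    """Laplace's law for sub-bandage pressure in Pa: bandage tension (N) per
    bandage width (m), divided by the local limb radius (m), times the layers."""
    return layers * tension / (radius * width)

PA_PER_MMHG = 133.322

# Same bandage, same applied tension, two local limb radii (illustrative):
p_ankle = interface_pressure(3.0, 0.025) / PA_PER_MMHG   # slimmer ankle
p_calf = interface_pressure(3.0, 0.045) / PA_PER_MMHG    # fuller calf
```

The smaller-radius site receives the higher pressure, which is why padding layers that even out the effective limb profile matter for achieving the graduated, uniform distribution discussed in the abstract.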
Radioactive Iron Rain: Transporting 60Fe in Supernova Dust to the Ocean Floor
NASA Astrophysics Data System (ADS)
Fry, Brian J.; Fields, Brian D.; Ellis, John R.
2016-08-01
Several searches have found evidence of 60Fe deposition, presumably from a near-Earth supernova (SN), with concentrations that vary in different locations on Earth. This paper examines various influences on the path of interstellar dust carrying 60Fe from an SN through the heliosphere, with the aim of estimating the final global distribution on the ocean floor. We study the influences of magnetic fields, angle of arrival, wind, and ocean cycling of SN material on the concentrations at different locations. We find that the passage of SN material through the mesosphere/lower thermosphere has the greatest influence on the final global distribution, with ocean cycling causing lesser alteration as the SN material sinks to the ocean floor. SN distance estimates in previous works that assumed a uniform distribution are a good approximation. Including the effects on surface distributions, we estimate a distance of 46 (+10/−6) pc for an 8-10 M⊙ SN progenitor. This is consistent with an SN occurring within the Tuc-Hor stellar group ~2.8 Myr ago, with SN material arriving on Earth ~2.2 Myr ago. We note that the SN dust retains directional information to within 1° through its arrival in the inner solar system, so that SN debris deposition on inert bodies such as the Moon will be anisotropic, and thus could in principle be used to infer directional information. In particular, we predict that existing lunar samples should show measurable 60Fe differences.
Italian Case Studies Modelling Complex Earthquake Sources In PSHA
NASA Astrophysics Data System (ADS)
Gee, Robin; Peruzza, Laura; Pagani, Marco
2017-04-01
This study presents two examples of modelling complex seismic sources in Italy, done in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred around Collalto Stoccaggio, a natural gas storage facility in Northern Italy, located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between the gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where the rates of earthquakes are assumed to be time-independent. The source model consists of faults and distributed seismicity to consider earthquakes that cannot be associated to specific structures. All potentially active faults within 50 km of the site are considered, and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motions prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study, where we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of Oct 26th and 30th (M6.1 and M6.6 respectively, no deaths). 
The aftershock hazard is modelled using a fault source with complex geometry, based on literature data and field evidence associated with the August mainshock. Earthquake activity rates during the very first weeks after the deadly earthquake were used to calibrate an Omori-Utsu decay curve, and the magnitude distribution of aftershocks is assumed to follow a Gutenberg-Richter distribution. We apply uniform and non-uniform spatial distributions of the seismicity across the fault source, modulating the rates as a decreasing function of distance from the mainshock. The hazard results are computed for short exposure periods (1 month, before the occurrence of the October earthquakes) and compared to the background hazard given by law (MPS04), and to observations at some reference sites. We also show the results of disaggregation computed for the city of Amatrice. Finally, we attempt to update the results in light of the new "main" events that occurred afterwards in the region. All source modeling and hazard calculations are performed using the OpenQuake engine. We discuss the novelties of these works, and the benefits and limitations of both analyses, particularly in such different contexts of seismic hazard.
Statistical distributions of avalanche size and waiting times in an inter-sandpile cascade model
NASA Astrophysics Data System (ADS)
Batac, Rene; Longjas, Anthony; Monterola, Christopher
2012-02-01
Sandpile-based models have successfully shed light on key features of nonlinear relaxational processes in nature, particularly the occurrence of fat-tailed magnitude distributions and exponential return times, from simple local stress redistributions. In this work, we extend the existing sandpile paradigm into an inter-sandpile cascade, wherein the avalanches emanating from a uniformly driven sandpile (first layer) are used to trigger the next (second layer), and so on, in a successive fashion. Statistical characterizations reveal that avalanche size distributions evolve from a power-law p(S)≈S^-1.3 for the first layer to gamma distributions p(S)≈S^α exp(-S/S0) for layers far away from the uniformly driven sandpile. The resulting avalanche size statistics are found to be associated with the corresponding waiting time distribution, as explained in an accompanying analytic formulation. Interestingly, both the numerical and analytic models show good agreement with actual inventories of non-uniformly driven events in nature.
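A minimal one-dimensional toy version of such an inter-sandpile cascade can be sketched in a few lines (the paper's piles are richer, presumably higher-dimensional; lattice size, drive count and critical height here are arbitrary choices): grains falling off the edges of a uniformly driven first layer drive a second layer, and avalanche sizes are collected per layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def topple(h):
    """Relax a 1D sandpile in place; return (topplings, grains lost off the edges)."""
    size = out = 0
    while True:
        over = np.flatnonzero(h >= 2)      # sites at or above the critical height
        if over.size == 0:
            return size, out
        for i in over:                     # abelian: processing order does not matter
            h[i] -= 2
            if i > 0:
                h[i - 1] += 1
            else:
                out += 1                   # grain falls off the left edge
            if i < h.size - 1:
                h[i + 1] += 1
            else:
                out += 1                   # grain falls off the right edge
            size += 1

L = 12
layer1, layer2 = np.zeros(L, int), np.zeros(L, int)
sizes1, sizes2 = [], []
for _ in range(1500):
    layer1[rng.integers(L)] += 1           # uniform random drive on the first layer
    s1, spill = topple(layer1)
    sizes1.append(s1)
    for _ in range(spill):                 # spilled grains drive the second layer
        layer2[rng.integers(L)] += 1
        s2, _ = topple(layer2)
        sizes2.append(s2)
```

The second layer is driven only by the intermittent output of the first, so its avalanche statistics inherit a non-uniform drive, which is the mechanism the abstract invokes for the drift from power-law toward gamma-like size distributions.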
Simulation of air velocity in a vertical perforated air distributor
NASA Astrophysics Data System (ADS)
Ngu, T. N. W.; Chu, C. M.; Janaun, J. A.
2016-06-01
Perforated pipes are used to divide a fluid flow into several smaller streams. Uniform flow distribution is of great concern in engineering applications because it has a significant influence on the performance of fluidic devices, and for industrial applications it is crucial to provide a uniform velocity distribution through the orifices. In this research, the flow distribution pattern of a vertically standing, closed-end, multiple-outlet pipe delivering air in the horizontal direction was simulated. Computational Fluid Dynamics (CFD), a research tool for enhancing and understanding design, was used as the simulator, and the drawing software SolidWorks was used for the geometry setup. The main purpose of this work is to establish the influence of orifice size, intervals between outlets, and tube length on the uniformity of exit flows through a multi-outlet perforated tube. However, because the compactness of paddy increases gradually from the top to the bottom of the dryer under gravity, uniform flow was targeted for the top orifices and larger flow for the bottom orifices.
School Uniform Policies in Public Schools
ERIC Educational Resources Information Center
Brunsma, David L.
2006-01-01
The movement for school uniforms in public schools continues to grow despite the author's research indicating little if any impact on student behavior, achievement, and self-esteem. The author examines the distribution of uniform policies by region and demographics, the impact of these policies on perceptions of school climate and safety, and…
Aging transition in systems of oscillators with global distributed-delay coupling.
Rahman, B; Blyuss, K B; Kyrychko, Y N
2017-09-01
We consider a globally coupled network of active (oscillatory) and inactive (nonoscillatory) oscillators with distributed-delay coupling. Conditions for aging transition, associated with suppression of oscillations, are derived for uniform and gamma delay distributions in terms of coupling parameters and the proportion of inactive oscillators. The results suggest that for the uniform distribution increasing the width of distribution for the same mean delay allows aging transition to happen for a smaller coupling strength and a smaller proportion of inactive elements. For gamma distribution with sufficiently large mean time delay, it may be possible to achieve aging transition for an arbitrary proportion of inactive oscillators, as long as the coupling strength lies in a certain range.
Analysis of economics of a TV broadcasting satellite for additional nationwide TV programs
NASA Technical Reports Server (NTRS)
Becker, D.; Mertens, G.; Rappold, A.; Seith, W.
1977-01-01
The influence of a TV broadcasting satellite, transmitting four additional TV networks was analyzed. It is assumed that the cost of the satellite systems will be financed by the cable TV system operators. The additional TV programs increase income by attracting additional subscribers. Two economic models were established: (1) each local network is regarded as an independent economic unit with individual fees (cost price model) and (2) all networks are part of one public cable TV company with uniform fees (uniform price model). Assumptions are made for penetration as a function of subscription rates. Main results of the study are: the installation of a TV broadcasting satellite improves the economics of CTV-networks in both models; the overall coverage achievable by the uniform price model is significantly higher than that achievable by the cost price model.
NASA Astrophysics Data System (ADS)
Meng, Su; Chen, Jie; Sun, Jian
2017-10-01
This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.
Impacts of relative permeability on CO2 phase behavior, phase distribution, and trapping mechanisms
NASA Astrophysics Data System (ADS)
Moodie, N.; McPherson, B. J. O. L.; Pan, F.
2015-12-01
A critical aspect of geologic carbon storage, a carbon-emissions reduction method under extensive review and testing, is effective multiphase CO2 flow and transport simulation. Relative permeability is a flow parameter particularly critical for accurate forecasting of multiphase behavior of CO2 in the subsurface. The relative permeability relationship assumed and especially the irreducible saturation of the gas phase greatly impacts predicted CO2 trapping mechanisms and long-term plume migration behavior. A primary goal of this study was to evaluate the impact of relative permeability on efficacy of regional-scale CO2 sequestration models. To accomplish this we built a 2-D vertical cross-section of the San Rafael Swell area of East-central Utah. This model simulated injection of CO2 into a brine aquifer for 30 years. The well was then shut-in and the CO2 plume behavior monitored for another 970 years. We evaluated five different relative permeability relationships to quantify their relative impacts on forecasted flow results of the model, with all other parameters maintained uniform and constant. Results of this analysis suggest that CO2 plume movement and behavior are significantly dependent on the specific relative permeability formulation assigned, including the assumed irreducible saturation values of CO2 and brine. More specifically, different relative permeability relationships translate to significant differences in CO2 plume behavior and corresponding trapping mechanisms.
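The sensitivity to the relative permeability relationship and to the irreducible saturations that the study evaluates can be illustrated with a generic Brooks-Corey-type model (the exponents and saturation endpoints below are hypothetical, not the study's calibrated curves):

```python
import numpy as np

def brooks_corey(sw, swr, sgr, nw=4.0, ng=2.0):
    """Brooks-Corey-type relative permeabilities for brine (wetting) and CO2
    (non-wetting); swr, sgr are the irreducible brine / residual gas saturations."""
    se = np.clip((sw - swr) / (1.0 - swr - sgr), 0.0, 1.0)  # effective saturation
    return se ** nw, (1.0 - se) ** ng                       # (krw, krg)

sw = np.linspace(0.0, 1.0, 101)        # brine saturation axis
# Two curve sets differing only in the residual gas saturation:
krw_a, krg_a = brooks_corey(sw, swr=0.2, sgr=0.05)
krw_b, krg_b = brooks_corey(sw, swr=0.2, sgr=0.30)
```

With sgr = 0.30 the CO2 phase is already immobile at sw = 0.8 (krg = 0), while with sgr = 0.05 it still flows there; in a reservoir simulation that difference translates directly into more or less residually trapped plume and different long-term migration, which is the effect the study quantifies.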
Effect of heterogeneous investments on the evolution of cooperation in spatial public goods game.
Huang, Keke; Wang, Tao; Cheng, Yuan; Zheng, Xiaoping
2015-01-01
Understanding the emergence of cooperation in the spatial public goods game remains a grand challenge across disciplines. In most previous studies, it is assumed that the investments of all cooperators are identical, and often equal to 1. However, players in modern society are diverse and heterogeneous when choosing actions, and researchers have recently shown growing interest in this heterogeneity. To model heterogeneous players without loss of generality, it is assumed in this work that the investment of a cooperator is a random variable with uniform distribution and mean value equal to 1. The results of extensive numerical simulations convincingly indicate that heterogeneous investments can promote cooperation. Specifically, a large variance of the random variable effectively decreases the two critical values governing the outcome of behavioral evolution; moreover, the larger the variance, the stronger the promotion effect. In addition, this article discusses the impact of heterogeneous investments when the coevolution of both strategy and investment is taken into account. Comparing the promotion effect of the coevolution of strategy and investment with that of strategy imitation only, we conclude that the coevolution of strategy and investment decreases the asymptotic fraction of cooperators by weakening the heterogeneity of investments, which further demonstrates that heterogeneous investments can promote cooperation in the spatial public goods game.
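A minimal Monte Carlo sketch of a spatial public goods game with uniformly distributed cooperator investments of mean 1 follows. The lattice size, synergy factor, Fermi noise, and the fixed-per-player investment rule are all illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(7)
L, r, K, steps = 20, 3.5, 0.5, 200        # lattice, synergy factor, Fermi noise, rounds
strategy = rng.integers(0, 2, (L, L))     # 1 = cooperator, 0 = defector
width = 1.0                               # investment ~ U(1 - width/2, 1 + width/2), mean 1
invest = rng.uniform(1 - width / 2, 1 + width / 2, (L, L))

def plus_sum(X):
    """Sum of X over the von Neumann group (self + 4 neighbors), periodic borders."""
    return X + sum(np.roll(X, s, ax) for s, ax in [(1, 0), (-1, 0), (1, 1), (-1, 1)])

shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
for _ in range(steps):
    pool = plus_sum(strategy * invest)                     # cooperator input to each group
    pay = r * plus_sum(pool) / 5 - 5 * strategy * invest   # payoff over the 5 groups joined
    pick = rng.integers(0, 4, (L, L))                      # one random neighbor per player
    new_s = strategy.copy()
    for k, sh in enumerate(shifts):
        neigh_s = np.roll(strategy, sh, (0, 1))
        neigh_p = np.roll(pay, sh, (0, 1))
        prob = 1 / (1 + np.exp(np.clip((pay - neigh_p) / K, -50, 50)))  # Fermi rule
        m = (pick == k) & (rng.random((L, L)) < prob)
        new_s[m] = neigh_s[m]
    strategy = new_s

fc = strategy.mean()   # fraction of cooperators after a short run
```

Sweeping `width` (i.e., the investment variance) and `r` over many seeds, rather than the single short run shown, is the kind of experiment behind the abstract's claim that larger variance lowers the critical synergy values.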
Quantitative Mapping of Matrix Content and Distribution across the Ligament-to-Bone Insertion
Spalazzi, Jeffrey P.; Boskey, Adele L.; Pleshko, Nancy; Lu, Helen H.
2013-01-01
The interface between bone and connective tissues such as the Anterior Cruciate Ligament (ACL) constitutes a complex transition traversing multiple tissue regions, including non-calcified and calcified fibrocartilage, which integrates and enables load transfer between otherwise structurally and functionally distinct tissue types. The objective of this study was to investigate region-dependent changes in collagen, proteoglycan and mineral distribution, as well as collagen orientation, across the ligament-to-bone insertion site using Fourier transform infrared spectroscopic imaging (FTIR-I). Insertion site-related differences in matrix content were also evaluated by comparing tibial and femoral entheses. Both region- and site-related changes were observed. Collagen content was higher in the ligament and bone regions, while decreasing across the fibrocartilage interface. Moreover, interfacial collagen fibrils were aligned parallel to the ligament-bone interface near the ligament region, assuming a more random orientation through the bulk of the interface. Proteoglycan content was uniform on average across the insertion, while its distribution was relatively less variable at the tibial compared to the femoral insertion. Mineral was only detected in the calcified interface region, and its content increased exponentially across the mineralized fibrocartilage region toward bone. In addition to new insights into matrix composition and organization across the complex multi-tissue junction, findings from this study provide critical benchmarks for the regeneration of soft tissue-to-bone interfaces and integrative soft tissue repair. PMID:24019964
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2013-12-01
Climate change may alter the spatial distribution, composition, structure, and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous, with sparse trees. The dynamics of ecotones are mainly determined by the growth and competition of individual plants in the communities. It is therefore necessary to calculate the solar radiation absorbed by individual plants in order to understand and predict their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate the solar radiation absorbed by individual plants in sparse, heterogeneous woody plant communities. The model is based on geometrical optical relationships, assuming that the crowns of woody plants are rectangular boxes with uniform leaf area density. It calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community. The solar radiation received on the ground is also calculated. We tested the model by comparing its results with analytical solutions for random distributions of plants. The tests show that the model results are very close to the averages over the random distributions. The model is computationally efficient and suitable for ecological models simulating long-term transient responses of plant communities to climate change.
Ando, Tadashi; Yu, Isseki; Feig, Michael; Sugita, Yuji
2016-11-23
The cytoplasm of a cell is crowded with many different kinds of macromolecules. The macromolecular crowding affects the thermodynamics and kinetics of biological reactions in a living cell, such as protein folding, association, and diffusion. Theoretical and simulation studies using simplified models focus on the essential features of the crowding effects and provide a basis for analyzing experimental data. In most of the previous studies on the crowding effects, a uniform crowder size is assumed, which is in contrast to the inhomogeneous size distribution of macromolecules in a living cell. Here, we evaluate the free energy changes upon macromolecular association in a cell-like inhomogeneous crowding system via a theory of hard-sphere fluids and free energy calculations using Brownian dynamics trajectories. The inhomogeneous crowding model based on 41 different types of macromolecules represented by spheres with different radii mimics the physiological concentrations of macromolecules in the cytoplasm of Mycoplasma genitalium. The free energy changes of macromolecular association evaluated by the theory and simulations were in good agreement with each other. The crowder size distribution affects both specific and nonspecific molecular associations, suggesting that not only the volume fraction but also the size distribution of macromolecules are important factors for evaluating in vivo crowding effects. This study relates in vitro experiments on macromolecular crowding to in vivo crowding effects by using the theory of hard-sphere fluids with crowder-size heterogeneity.
Detecting binary neutron star systems with spin in advanced gravitational-wave detectors
NASA Astrophysics Data System (ADS)
Brown, Duncan A.; Harry, Ian; Lundgren, Andrew; Nitz, Alexander H.
2012-10-01
The detection of gravitational waves from binary neutron stars is a major goal of the gravitational-wave observatories Advanced LIGO and Advanced Virgo. Previous searches for binary neutron stars with LIGO and Virgo neglected the component stars’ angular momentum (spin). We demonstrate that neglecting spin in matched-filter searches causes advanced detectors to lose more than 3% of the possible signal-to-noise ratio for 59% (6%) of sources, assuming that neutron star dimensionless spins, cJ/GM², are uniformly distributed with magnitudes between 0 and 0.4 (0.05) and that the neutron stars have isotropically distributed spin orientations. We present a new method for constructing template banks for gravitational-wave searches for systems with spin. We present a new metric in a parameter space in which the template placement metric is globally flat. This new method can create template banks of signals with nonzero spins that are (anti-)aligned with the orbital angular momentum. We show that this search loses more than 3% of the maximum signal-to-noise for only 9% (0.2%) of binary neutron star sources with dimensionless spins between 0 and 0.4 (0.05) and isotropic spin orientations. Use of this template bank will prevent selection bias in gravitational-wave searches and allow a more accurate exploration of the distribution of spins in binary neutron stars.
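The assumed spin population is straightforward to sample directly; the magnitude cap of 0.4 is taken from the abstract, while the sampler itself is a generic sketch:

```python
import math
import random

def sample_spin(max_mag):
    """Draw a dimensionless spin vector: magnitude uniform in [0, max_mag],
    orientation isotropic on the sphere (cos(theta) uniform in [-1, 1],
    azimuth uniform in [0, 2*pi))."""
    mag = random.uniform(0.0, max_mag)
    cos_t = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    sin_t = math.sqrt(1.0 - cos_t ** 2)
    return (mag * sin_t * math.cos(phi),
            mag * sin_t * math.sin(phi),
            mag * cos_t)
```

A population drawn this way (e.g. 0.4 or 0.05 as the cap) is what the quoted loss fractions are computed over; drawing only the z-component would instead give the (anti-)aligned spins the template bank covers.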
C.G., Ellis; S., Milkovich; D., Goldman
2012-01-01
Erythrocytes appear to be ideal sensors for regulating microvascular O2 supply since they release the potent vasodilator adenosine 5′-triphosphate (ATP) in an O2 saturation-dependent manner. Whether erythrocytes play a significant role in regulating O2 supply in the complex environment of diffusional O2 exchange among capillaries, arterioles and venules depends on the efficiency with which erythrocytes signal the vascular endothelium. If one assumes that the distribution of purinergic receptors is uniform throughout the microvasculature, then the most efficient site for signaling should be the capillaries, where the erythrocyte membrane is in close proximity to the endothelium. ATP released from erythrocytes would diffuse a short distance to P2y receptors, inducing an increase in blood flow, possibly as a result of endothelial hyperpolarization. We hypothesize that this hyperpolarization varies across the capillary bed depending upon erythrocyte supply rate and the flux of O2 from these erythrocytes to support O2 metabolism. This would suggest that the capillary bed would be the most effective site for erythrocytes to communicate tissue oxygen needs. Electrically coupled endothelial cells conduct the integrated signal upstream, where arterioles adjust vascular resistance, thus enabling ATP released from erythrocytes to regulate the magnitude and distribution of O2 supply to individual capillary networks. PMID:22587367
NASA Astrophysics Data System (ADS)
Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.
2012-12-01
The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter and depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns, and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution has been proposed. We applied this modified centroid method to aeromagnetic data of the central Indian region, selecting 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are as shallow as the middle crust in the southwestern Deccan trap and probably deeper than the Moho in the Chhattisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in others it is shallower than the Moho. The DBMS values indicate the complex nature of the Indian crust.
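A minimal sketch of the conventional centroid method referred to above; the fractal-distribution correction the abstract proposes is not shown, and the wavenumber bands and synthetic spectra in the test are illustrative:

```python
def slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def centroid_curie_depth(k_high, ln_amp_high, k_low, ln_amp_over_k_low):
    """Conventional centroid method (random, uncorrelated sources assumed):
    the top depth Zt comes from the high-wavenumber slope of ln|A(k)|, the
    centroid depth Z0 from the low-wavenumber slope of ln(|A(k)|/k), and the
    bottom depth (~Curie-point depth) is Zb = 2*Z0 - Zt."""
    zt = -slope(k_high, ln_amp_high)
    z0 = -slope(k_low, ln_amp_over_k_low)
    return 2.0 * z0 - zt
```

The fractal modification would add a wavenumber-dependent correction to these spectral slopes before the fit.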
Colombia: A Country Under Constant Threat of Disasters
2014-05-22
Disasters strike every nation in the world, and although these events do not occur with uniformity of distribution, developing nations suffer the greatest...
Towards a Logical Distinction Between Swarms and Aftershock Sequences
NASA Astrophysics Data System (ADS)
Gardine, M.; Burris, L.; McNutt, S.
2007-12-01
The distinction between swarms and aftershock sequences has, up to this point, been fairly arbitrary and nonuniform. Typically, a 0.5 to 1 order-of-magnitude difference between the mainshock and largest aftershock has been the traditional choice, but there are many exceptions. Seismologists have generally assumed that the mainshock carries most of the energy, but this is only true if it is sufficiently large compared to the sizes and numbers of aftershocks. Here we present a systematic division based on the energy of the aftershock sequence compared to the energy of the largest event of the sequence. It is possible to calculate the amount of aftershock energy assumed to be in the sequence using the b-value of the frequency-magnitude relation with a fixed choice of magnitude separation (M-mainshock minus M-largest aftershock). Assuming that the energy of an aftershock sequence is less than the energy of the mainshock, the b-value at which the aftershock energy exceeds the mainshock energy determines the boundary between aftershock sequences and swarms. The amount of energy for various choices of b-value is also calculated using different values of magnitude separation. When the minimum b-value at which the sequence energy exceeds that of the largest event/mainshock is plotted against the magnitude separation, a linear trend emerges. Values plotting above this line represent swarms and values plotting below it represent aftershock sequences. This scheme has the advantage that it represents a physical quantity - energy - rather than only statistical features of earthquake distributions. As such it may be useful to help distinguish swarms from mainshock/aftershock sequences and to better determine the underlying causes of earthquake swarms.
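The energy bookkeeping described above can be sketched with Gutenberg-Richter counts and the standard magnitude-energy scaling E ∝ 10^(1.5M); the mainshock magnitude, minimum magnitude, bin width, and normalization below are illustrative assumptions, not values from the abstract:

```python
def sequence_to_mainshock_energy(b, dmag, m_main=6.0, m_min=0.0, dm=0.1):
    """Ratio of total aftershock-sequence energy to mainshock energy.
    Counts follow Gutenberg-Richter, N(M) ~ 10**(-b*M), normalized so that
    exactly one event has the largest-aftershock magnitude m_main - dmag;
    radiated energy scales as E(M) ~ 10**(1.5*M)."""
    m_max = m_main - dmag  # magnitude of the largest aftershock
    ratio, m = 0.0, m_min
    while m <= m_max + 1e-9:
        count = 10.0 ** (b * (m_max - m))          # one event at m == m_max
        ratio += count * 10.0 ** (1.5 * (m - m_main))
        m += dm
    return ratio
```

Scanning b for a fixed magnitude separation and finding where the ratio crosses 1 reproduces the swarm/aftershock boundary the abstract describes.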
DOE Office of Scientific and Technical Information (OSTI.GOV)
Currie, Thayne; Cloutier, Ryan; Jayawardhana, Ray
2014-11-10
We present new L' (3.8 μm) and Brα (4.05 μm) data and reprocessed archival L' data for the young, planet-hosting star HR 8799 obtained with Keck/NIRC2, VLT/NaCo, and Subaru/IRCS. We detect all four HR 8799 planets in each data set at a moderate to high signal-to-noise ratio (S/N ≳ 6-15). We fail to identify a fifth planet, 'HR 8799 f', at r < 15 AU at a 5σ confidence level: one suggestive, marginally significant residual at 0.''2 is most likely a point-spread function artifact. Assuming companion ages of 30 Myr and the Baraffe planet cooling models, we rule out an HR 8799 f with a mass of 5 M_J (7 M_J), 7 M_J (10 M_J), or 12 M_J (13 M_J) at r_proj ∼ 12 AU, 9 AU, and 5 AU, respectively. All four HR 8799 planets have red early T dwarf-like L' – [4.05] colors, suggesting that their spectral energy distributions peak in between the L' and M' broadband filters. We find no statistically significant difference in HR 8799 cde's color. Atmosphere models assuming thick, patchy clouds appear to better match HR 8799 bcde's photometry than models assuming a uniform cloud layer. While non-equilibrium carbon chemistry is required to explain HR 8799 b and c's photometry/spectra, evidence for it from HR 8799 d and e's photometry is weaker. Future, deep-IR spectroscopy/spectrophotometry with the Gemini Planet Imager, SCExAO/CHARIS, and other facilities may clarify whether the planets are chemically similar or heterogeneous.
Hsu, Ya-Chu; Hung, Yu-Chen; Wang, Chiu-Yen
2017-09-15
High-uniformity Au-catalyzed indium selenide (In2Se3) nanowires are grown with a rapid thermal annealing (RTA) treatment via the vapor-liquid-solid (VLS) mechanism. The diameters of Au-catalyzed In2Se3 nanowires can be controlled by varying the thickness of the Au film, and the uniformity of the nanowires is improved via a fast pre-annealing rate of 100 °C/s. Compared with a slower heating rate of 0.1 °C/s, the average diameters and distributions (standard deviation, SD) of In2Se3 nanowires with and without the RTA process are 97.14 ± 22.95 nm (23.63%) and 119.06 ± 48.75 nm (40.95%), respectively. In situ annealing TEM is used to study the effect of heating rate on the formation of Au nanoparticles from the as-deposited Au film. The results demonstrate that the average diameters and distributions of Au nanoparticles with and without the RTA process are 19.84 ± 5.96 nm (30.00%) and 22.06 ± 9.00 nm (40.80%), respectively. This proves that the diameters of Au-catalyzed In2Se3 nanowires are reduced and their distribution and uniformity improved by the RTA pre-treatment. This systematic study could help control the size distribution of other nanomaterials through tuning of the annealing rate, precursor temperature, and growth substrate. Graphical Abstract: The rapid thermal annealing (RTA) process can make the size distribution of Au nanoparticles uniform, and these nanoparticles can then be used to grow high-uniformity Au-catalyzed In2Se3 nanowires via the vapor-liquid-solid (VLS) mechanism. Under the general growth condition, the heating rate is slow (0.1 °C/s) and the growth temperature relatively high (> 650 °C). An RTA pre-treated growth substrate forms smaller and more uniform Au nanoparticles that react with the In2Se3 vapor to produce high-uniformity In2Se3 nanowires.
In situ annealing TEM is used to examine the effect of heating rate on Au nanoparticle formation from the as-deposited Au film. The byproduct of self-catalyzed In2Se3 nanoplates can be inhibited by lowering the precursor and growth temperatures.
Washington State School Finance, 1999: A Special Focus on Teacher Salaries.
ERIC Educational Resources Information Center
Plecki, Margaret L.
This paper provides current information about the funding of Washington's K-12 school finance system. Schools in Washington State derive most of their revenues from state sources. In response to a 1977 court ruling, 'Seattle v. State of Washington', the state assumed responsibility for funding "basic education" for a "uniform system…
Testing for OO-Faithfulness in the Acquisition of Consonant Clusters
ERIC Educational Resources Information Center
Tessier, Anne-Michelle
2012-01-01
This article provides experimental evidence for the claim in Hayes (2004) and McCarthy (1998) that language learners are biased to assume that morphological paradigms should be phonologically-uniform--that is, that derived words should retain all the phonological properties of their bases. The evidence comes from an artificial language…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
..., Transformer Housings, Junction Boxes, and Accessories Airport Design'' Advisory Circular, AC 150/5345-42G. The... necessary to carry out this subchapter and regulations to be assumed by the sponsor. Uniform design... A307-A) per Engineering Brief (EB) 83, In-Pavement Light Fixture Bolts is introduced where applicable...
Mobility and Academic Literacies: An Epistolary Conversation
ERIC Educational Resources Information Center
Blommaert, Jan; Horner, Bruce
2017-01-01
This article explores the implications of a mobilities perspective for the conceptualization, teaching, and study of academic literacies. Mobility has come to serve as a catalyst for rethinking scholarly work in a variety of fields--most provocatively, the assumed stability as well as uniformity of what is studied and the location and products of…
DOE R&D Accomplishments Database
Wigner, E. P.; Wilkins, J. E. Jr.
1944-09-14
In this paper we set up an integral equation governing the energy distribution of neutrons that are being slowed down uniformly throughout the entire space by a uniformly distributed moderator whose atoms are in motion with a Maxwellian distribution of velocities. The effects of chemical binding and crystal reflection are ignored. When the moderator is hydrogen, the integral equation is reduced to a differential equation and solved by numerical methods. In this manner we obtain a refinement of the dv/v² law. (auth)
Characterization of Dispersive Ultrasonic Rayleigh Surface Waves in Asphalt Concrete
NASA Astrophysics Data System (ADS)
In, Chi-Won; Kim, Jin-Yeon; Jacobs, Laurence J.; Kurtis, Kimberly E.
2008-02-01
This research focuses on the application of ultrasonic Rayleigh surface waves to nondestructively characterize the mechanical properties and structural defects (non-uniformly distributed aggregate) in asphalt concrete. An efficient wedge technique is developed in this study to generate Rayleigh surface waves, which proves effective in this highly viscoelastic (attenuating) and heterogeneous medium. Experiments are performed on an asphalt-concrete beam produced with uniformly distributed aggregate. Ultrasonic techniques using both contact and non-contact sensors are examined and their results are compared. Experimental results show that the wedge technique along with an air-coupled sensor appears to be effective in characterizing Rayleigh waves in asphalt concrete. Hence, measurement of these material properties in non-uniformly distributed aggregate material should next be investigated using these techniques.
Prideaux, Andrew R.; Song, Hong; Hobbs, Robert F.; He, Bin; Frey, Eric C.; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George
2010-01-01
Phantom-based and patient-specific imaging-based dosimetry methodologies have traditionally yielded mean organ-absorbed doses or spatial dose distributions over tumors and normal organs. In this work, radiobiologic modeling is introduced to convert the spatial distribution of absorbed dose into biologically effective dose and equivalent uniform dose parameters. The methodology is illustrated using data from a thyroid cancer patient treated with radioiodine. Methods Three registered SPECT/CT scans were used to generate 3-dimensional images of radionuclide kinetics (clearance rate) and cumulated activity. The cumulated activity image and corresponding CT scan were provided as input into an EGSnrc-based Monte Carlo calculation: The cumulated activity image was used to define the distribution of decays, and an attenuation image derived from CT was used to define the corresponding spatial tissue density and composition distribution. The rate images were used to convert the spatial absorbed dose distribution to a biologically effective dose distribution, which was then used to estimate a single equivalent uniform dose for segmented volumes of interest. Equivalent uniform dose was also calculated from the absorbed dose distribution directly. Results We validate the method using simple models; compare the dose-volume histogram with a previously analyzed clinical case; and give the mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for an illustrative case of a pediatric thyroid cancer patient with diffuse lung metastases. The mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for the tumor were 57.7, 58.5, and 25.0 Gy, respectively. Corresponding values for normal lung tissue were 9.5, 9.8, and 8.3 Gy, respectively. Conclusion The analysis demonstrates the impact of radiobiologic modeling on response prediction. 
The 57% reduction in the equivalent dose value for the tumor reflects a high level of dose nonuniformity in the tumor and a corresponding reduced likelihood of achieving a tumor response. Such analyses are expected to be useful in treatment planning for radionuclide therapy. PMID:17504874
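A minimal sketch of the equivalent-uniform-dose step, using the survival-based definition applied directly to an absorbed-dose distribution; the radiosensitivity alpha below is an assumed value, and the full method in the record above first converts absorbed dose to biologically effective dose before computing EUD:

```python
import math

def equivalent_uniform_dose(voxel_doses, alpha=0.35):
    """Equivalent uniform dose (Gy) for a list of voxel doses: the uniform
    dose that yields the same mean cell survival under a simple exponential
    survival model S(D) = exp(-alpha * D). alpha (Gy^-1) is an assumed
    radiosensitivity, not a value from the paper."""
    n = len(voxel_doses)
    mean_survival = sum(math.exp(-alpha * d) for d in voxel_doses) / n
    return -math.log(mean_survival) / alpha
```

For a perfectly uniform distribution the EUD equals the dose itself; any nonuniformity at fixed mean dose lowers the EUD, which is the effect behind the 57% reduction quoted above.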
DOSE ASSESSMENT OF THE FINAL INVENTORIES IN CENTER SLIT TRENCHES ONE THROUGH FIVE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collard, L.; Hamm, L.; Smith, F.
2011-05-02
In response to a request from Solid Waste Management (SWM), this study evaluates the performance of waste disposed in Slit Trenches 1-5 by calculating exposure doses and concentrations. As of 8/19/2010, Slit Trenches 1-5 have been filled and are closed to future waste disposal in support of an ARRA-funded interim operational cover project. Slit Trenches 6 and 7 are currently in operation and are not addressed within this analysis. Their current inventory limits are based on the 2008 SA and are not being impacted by this study. This analysis considers the location and the timing of waste disposal in Slit Trenches 1-5 throughout their operational life. In addition, the following improvements to the modeling approach have been incorporated into this analysis: (1) Final waste inventories from WITS are used for the base case analysis where variance in the reported final disposal inventories is addressed through a sensitivity analysis; (2) Updated Kd values are used; (3) Area percentages of non-crushable containers are used in the analysis to determine expected infiltration flows for cases that consider collapse of these containers; (4) An updated representation of ETF carbon column vessels disposed in SLIT3-Unit F is used. Preliminary analyses indicated a problem meeting the groundwater beta-gamma dose limit because of high H-3 and I-129 release from the ETF vessels. The updated model uses results from a recent structural analysis of the ETF vessels indicating that water does not penetrate the vessels for about 130 years and that the vessels remain structurally intact throughout the 1130-year period of assessment; and (5) Operational covers are included with revised installation dates and sets of Slit Trenches that have a common cover. With the exception of the modeling enhancements noted above, the analysis follows the same methodology used in the 2008 PA (WSRC, 2008) and the 2008 SA (Collard and Hamm, 2008).
Infiltration flows through the vadose zone are identical to the flows used in the 2008 PA, except for flows during the operational cover time period. The physical (i.e., non-geochemical) models of the vadose zone and aquifer are identical in most cases to the models used in the 2008 PA. However, the 2008 PA assumed a uniform distribution of waste within each Slit Trench (WITS Location) and assumed that the entire inventory of each trench was disposed of at the time the first Slit Trench was opened. The current analysis considers individual trench excavations (i.e., segments) and groups of segments (i.e., Inventory Groups, also known as WITS Units) within Slit Trenches. Waste disposal is assumed to be spatially uniform in each Inventory Group and is distributed in time increments of six months or less between the time the Inventory Group was opened and closed.
Enhancement of viability of muscle precursor cells on 3D scaffold in a perfusion bioreactor.
Cimetta, E; Flaibani, M; Mella, M; Serena, E; Boldrin, L; De Coppi, P; Elvassore, N
2007-05-01
The aim of this study was to develop a methodology for the in vitro expansion of skeletal-muscle precursor cells (SMPC) in a three-dimensional (3D) environment in order to fabricate a cellularized artificial graft characterized by a high density of viable cells and a uniform cell distribution over the entire 3D domain. Cell seeding and culture within 3D porous scaffolds by conventional static techniques can lead to a uniform cell distribution only on the scaffold surface, whereas dynamic culture systems have the potential to allow uniform growth of SMPCs within the entire scaffold structure. In this work, we designed and developed a perfusion bioreactor able to ensure long-term culture conditions and uniform flow of medium through 3D collagen sponges. A mathematical model was developed to assist the design of the experimental setup and of the operative conditions. The effects of dynamic vs. static culture on cell viability and spatial distribution within 3D collagen scaffolds were evaluated at 1, 4 and 7 days and for flow rates of 1, 2, 3.5 and 4.5 ml/min using the C2C12 muscle cell line and SMPCs derived from satellite cells. C2C12 cells, after 7 days of culture in our bioreactor perfused at a 3.5 ml/min flow rate, showed a three-fold increase in viability compared with cultures kept under static conditions. In addition, dynamic culture resulted in a more uniform 3D cell distribution. The 3.5 ml/min flow rate in the bioreactor was also applied to satellite cell-derived SMPCs cultured on 3D collagen scaffolds. The dynamic culture conditions improved cell viability, leading to higher cell density and uniform distribution throughout the entire 3D collagen sponge for both C2C12 and satellite cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinouski, M.; Kehr, S.; Finney, L.
2012-04-17
Recent advances in quantitative methods and sensitive imaging techniques of trace elements provide opportunities to uncover and explain their biological roles. In particular, the distribution of selenium in tissues and cells under both physiological and pathological conditions remains unknown. In this work, we applied high-resolution synchrotron X-ray fluorescence microscopy (XFM) to map selenium distribution in mouse liver and kidney. Liver showed a uniform selenium distribution that was dependent on selenocysteine tRNA[Ser]Sec and dietary selenium. In contrast, kidney selenium had both uniformly distributed and highly localized components, the latter visualized as thin circular structures surrounding proximal tubules. Other parts of the kidney, such as glomeruli and distal tubules, only manifested the uniformly distributed selenium pattern that co-localized with sulfur. We found that proximal tubule selenium localized to the basement membrane. It was preserved in Selenoprotein P knockout mice, but was completely eliminated in glutathione peroxidase 3 (GPx3) knockout mice, indicating that this selenium represented GPx3. We further imaged kidneys of another model organism, the naked mole rat, which showed a diminished uniformly distributed selenium pool, but preserved the circular proximal tubule signal. We applied XFM to image selenium in mammalian tissues and identified a highly localized pool of this trace element at the basement membrane of kidneys that was associated with GPx3. XFM allowed us to define and explain the tissue topography of selenium in mammalian kidneys at submicron resolution.
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
NASA Astrophysics Data System (ADS)
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
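The GMR step — a conditional distribution of the predictand given the covariate, derived from a joint Gaussian mixture — can be sketched for a scalar covariate. In HMM-GMR the component weights would additionally be modulated by the hidden-state probabilities, which is omitted here; all component parameters below are illustrative:

```python
import math

def gmr_predict(x, comps):
    """Gaussian Mixture Regression, scalar sketch. Each component is a tuple
    (weight, mean_x, mean_y, var_x, cov_xy, var_y) describing a joint
    Gaussian over (x, y). Returns the mean of the implied conditional
    distribution p(y | x)."""
    # responsibility of each component for the observed covariate x
    likes = [w * math.exp(-(x - mx) ** 2 / (2 * vx)) / math.sqrt(2 * math.pi * vx)
             for (w, mx, my, vx, cxy, vy) in comps]
    total = sum(likes)
    mean = 0.0
    for like, (w, mx, my, vx, cxy, vy) in zip(likes, comps):
        resp = like / total
        mean += resp * (my + cxy / vx * (x - mx))  # conditional component mean
    return mean
```

The full predictive distribution is itself a Gaussian mixture, so spread measures (for the probability-integral-transform reliability check the abstract mentions) follow from the same responsibilities.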
Formation and evolution of bubbly screens in confined oscillating bubbly liquids.
Shklyaev, Sergey; Straube, Arthur V
2010-01-01
We consider the dynamics of dilute monodisperse bubbly liquid confined by two plane solid walls and subject to small-amplitude high-frequency oscillations normal to the walls. The initial state corresponds to the uniform distribution of bubbles and motionless liquid. The period of external driving is assumed much smaller than typical relaxation times for a single bubble but larger than the period of volume eigenoscillations. The time-averaged description accounting for the two-way coupling between the liquid and the bubbles is applied. We show that the model predicts accumulation of bubbles in thin sheets parallel to the walls. These singular structures, which are formally characterized by infinitely thin width and infinitely high concentration, are referred to as bubbly screens. The formation of a bubbly screen is described analytically in terms of a self-similar solution, which is in agreement with numerical simulations. We study the evolution of bubbly screens and detect a one-dimensional stationary state, which is shown to be unconditionally unstable.
Factors affecting the sticking of insects on modified aircraft wings
NASA Technical Reports Server (NTRS)
Yi, O.; Chitsaz-Z, M. R.; Eiss, N. S.; Wightman, J. P.
1988-01-01
Previous work showed that the total number of insects sticking to an aluminum surface was reduced by coating the aluminum surface with elastomers. Due to a large number of possible experimental errors, no correlation between the modulus of elasticity of the elastomer and the total number of insects sticking to a given elastomer was obtained. One of the errors assumed to be introduced during the road test is a variable insect flux, so the number of insects striking one surface might differ from that striking another sample. To eliminate this source of error, the road test used to collect insects was simulated in a laboratory through the development of an insect impacting technique using a pipe and high-pressure compressed air. The insects are accelerated by a compressed air gun to high velocities and are then impacted against a stationary target on which the sample is mounted. The velocity of an object exiting the pipe was determined, and the technique was further refined to obtain a uniform air velocity distribution.
Formation and evolution of bubbly screens in confined oscillating bubbly liquids
NASA Astrophysics Data System (ADS)
Shklyaev, Sergey; Straube, Arthur V.
2010-01-01
We consider the dynamics of dilute monodisperse bubbly liquid confined by two plane solid walls and subject to small-amplitude high-frequency oscillations normal to the walls. The initial state corresponds to the uniform distribution of bubbles and motionless liquid. The period of external driving is assumed much smaller than typical relaxation times for a single bubble but larger than the period of volume eigenoscillations. The time-averaged description accounting for the two-way coupling between the liquid and the bubbles is applied. We show that the model predicts accumulation of bubbles in thin sheets parallel to the walls. These singular structures, which are formally characterized by infinitely thin width and infinitely high concentration, are referred to as bubbly screens. The formation of a bubbly screen is described analytically in terms of a self-similar solution, which is in agreement with numerical simulations. We study the evolution of bubbly screens and detect a one-dimensional stationary state, which is shown to be unconditionally unstable.
New formulation of the discrete element method
NASA Astrophysics Data System (ADS)
Rojek, Jerzy; Zubelewicz, Aleksander; Madan, Nikhil; Nosewicz, Szymon
2018-01-01
A new original formulation of the discrete element method based on the soft contact approach is presented in this work. The standard DEM has been enhanced by the introduction of an additional (global) deformation mode caused by the stresses induced in the particles by the contact forces. Uniform stresses and strains are assumed for each particle. The stresses are calculated from the contact forces, and the strains are obtained using an inverse constitutive relationship. The strains allow us to obtain deformed particle shapes. The deformed shapes (ellipses) are taken into account in contact detection and in the evaluation of the contact forces. A simple example of uniaxial compression of a rectangular specimen, discretized with equal-sized particles, is simulated to verify the DDEM algorithm. The numerical example shows that particle deformation changes the particle interaction and the distribution of forces in the discrete element assembly. A quantitative study of micro-macro elastic properties demonstrates the enhanced capabilities of the DDEM compared with the standard DEM.
[CLINICAL AND EPIDEMIOLOGICAL PECULIARITIES OF CYSTIC ECHINOCOCCOSIS IN CHILDREN].
Melia, Kh; Kokaia, N; Manjgaladze, M; Kelbakiani-Kvinikhidze, T; Sulaberidze, G
2017-04-01
The postoperative period of cystic echinococcosis was studied in 13 children. Demographic, epidemiological, clinical, diagnostic, and treatment data, together with the number, location, and development of cysts and serologic data, were analyzed. The age of the children at diagnosis ranged from 5 to 17 years. All patients with cystic echinococcosis had abdominal cysts. The liver was the main organ involved: ten patients (76.9%) had cysts located in the liver, two patients (15.4%) had lung cysts, and one patient had concomitant lung and liver cysts. Twelve patients had single cysts and one had more than one abdominal cyst. Surgical treatment was performed in 23.1% of cases. Ultrasound (US) studies were performed during the monitoring period, and the cysts were assessed by monitoring US changes. Positive dynamics were revealed in all patients; no relapse of the disease was noticed. Because in all patients the echoarchitectonics of the hepatic tissue was lumpy, with a non-uniform structure and uneven ultrasound distribution, it is assumed that these changes indicate the development of connective tissue in the liver.
Elliptical storm cell modeling of digital radar data
NASA Technical Reports Server (NTRS)
Altman, F. J.
1972-01-01
A model for spatial distributions of reflectivity in storm cells was fitted to digital radar data. The data were taken with a modified WSR-57 weather radar with 2.6-km resolution. The data consisted of modified B-scan records on magnetic tape of storm cells tracked at 0 deg elevation for several hours. The MIT L-band radar with 0.8-km resolution produced cross-section data on several cells at 1/2 deg elevation intervals. The model developed uses ellipses for contours of constant effective-reflectivity factor Z with constant orientation and eccentricity within a horizontal cell cross section at a given time and elevation. The centers of the ellipses are assumed to be uniformly spaced on a straight line, with areas linearly related to log Z. All cross sections are similar at different heights (except for cell tops, bottoms, and splitting cells), especially for the highest reflectivities; wind shear causes some translation and rotation between levels. Goodness-of-fit measures and parameters of interest for 204 ellipses are considered.
NASA Astrophysics Data System (ADS)
Stukan, M. R.; Boek, E. S.; Padding, J. T.; Crawshaw, J. P.
2008-05-01
Viscoelastic wormlike micelles are formed by surfactants assembling into elongated cylindrical structures. These structures respond to flow by aligning, breaking and reforming. Their response to the complex flow fields encountered in porous media is particularly rich. Here we use a realistic mesoscopic Brownian Dynamics model to investigate the flow of a viscoelastic surfactant (VES) fluid through individual pores idealized as a step expansion-contraction of size around one micron. In a previous study, we assumed the flow field to be Newtonian. Here we extend the work to include the non-Newtonian flow field previously obtained by experiment. The size of the simulations is also increased so that the pore is much larger than the radius of gyration of the micelles. For the non-Newtonian flow field at the higher flow rates in relatively large pores, the density of the micelles becomes markedly non-uniform. In this case, we find that the density in the large, slowly moving entry corner regions is substantially increased.
Hastings, A.; Hom, C. L.
1989-01-01
We demonstrate that, in a model incorporating weak Gaussian stabilizing selection on n additively determined characters, at most n loci are polymorphic at a stable equilibrium. The number of characters is defined to be the number of independent components in the Gaussian selection scheme. We also assume linkage equilibrium, and that either the number of loci is large enough that the phenotypic distribution in the population can be approximated as multivariate Gaussian or that selection is weak enough that the mean fitness of the population can be approximated using only the mean and the variance of the characters in the population. Our results appear to rule out antagonistic pleiotropy without epistasis as a major force in maintaining additive genetic variation in a uniform environment. However, they are consistent with the maintenance of variability by genotype-environment interaction if a trait in different environments corresponds to different characters and the number of different environments exceeds the number of polymorphic loci that affect the trait. PMID:2767424
The effect of suspended particles on Jean's criterion for gravitational instability
NASA Technical Reports Server (NTRS)
Wollkind, David J.; Yates, Kemble R.
1990-01-01
The effect that the proper inclusion of suspended particles has on Jeans' criterion for the self-gravitational instability of an unbounded nonrotating adiabatic gas cloud is examined by formulating the appropriate model system, introducing particular physically plausible equations of state and constitutive relations, performing a linear stability analysis of a uniformly expanding exact solution to these governing equations, and exploiting the fact that there exists a natural small material parameter for this problem given by N sub 1/n sub 1, the ratio of the initial number density for the particles to that for the gas. The main result of this investigation is the derivation of an altered criterion which can substantially reduce Jeans' original critical wavelength for instability. It is then shown that the existing discrepancy between Jeans' theoretical prediction and actual observational data relevant to the Andromeda nebula M31 can be accounted for by this new criterion, assuming suspended particles of a reasonable grain size and distribution to be present.
Early events in speciation: Cryptic species of Drosophila aldrichi.
Castro Vargas, Cynthia; Richmond, Maxi Polihronakis; Ramirez Loustalot Laclette, Mariana; Markow, Therese Ann
2017-06-01
Understanding the earliest events in speciation remains a major challenge in evolutionary biology. Thus identifying species whose populations are beginning to diverge can provide useful systems to study the process of speciation. Drosophila aldrichi, a cactophilic fruit fly species with a broad distribution in North America, has long been assumed to be a single species owing to its morphological uniformity. While previous reports either of genetic divergence or reproductive isolation among different D. aldrichi strains have hinted at the existence of cryptic species, the evolutionary relationships of this species across its range have not been thoroughly investigated. Here we show that D. aldrichi actually is paraphyletic with respect to its closest relative, Drosophila wheeleri, and that divergent D. aldrichi lineages show complete hybrid male sterility when crossed. Our data support the interpretation that there are at least two species of D. aldrichi, making these flies particularly attractive for studies of speciation in an ecological and geographical context.
Measuring and Modeling Sonoporation Dynamics in Mammalian Cells via Calcium Imaging
NASA Astrophysics Data System (ADS)
Kumon, R. E.; Parikh, P.; Sabens, D.; Aehle, M.; Kourennyi, D.; Deng, C. X.
2007-05-01
In this study, calcium imaging via the fluorescent indicator Fura-2 is used to characterize the sonoporation of Chinese hamster ovary (CHO) cells in the presence of Optison™ microbubbles. Evolution of the calcium concentration within cells is determined from real-time fluorescence intensity measurements before, during, and after exposure to a 1 MHz ultrasound tone burst (0.2 s, 0.45 MPa). To relate microscopic sonoporation parameters to the measurements, an analytical model that includes sonoporation and plasma membrane transport is developed, assuming rapid mixing (uniform spatial distribution) in the cell. Fitting the measured data to the model provides estimated values for the poration area as a function of poration relaxation rate as well as plasma membrane pump and leakage rates. A modified compartment model that includes the effects of sonoporation, buffering proteins, and transport across the plasma membrane, endoplasmic reticulum, and mitochondria is also investigated. Numerical solutions of this model show a variety of behaviors for the calcium dynamics of the cell.
Yield modeling of acoustic charge transport transversal filters
NASA Technical Reports Server (NTRS)
Kenney, J. S.; May, G. S.; Hunt, W. D.
1995-01-01
This paper presents a yield model for acoustic charge transport transversal filters. This model differs from previous IC yield models in that it does not assume that individual failures of the nondestructive sensing taps necessarily cause a device failure. A redundancy in the number of taps included in the design is explained. Poisson statistics are used to describe the tap failures, weighted over a uniform defect density distribution. A representative design example is presented. The minimum number of taps needed to realize the filter is calculated, and tap weights for various numbers of redundant taps are calculated. The critical area for device failure is calculated for each level of redundancy. Yield is predicted for a range of defect densities and redundancies. To verify the model, a Monte Carlo simulation is performed on an equivalent circuit model of the device. The results of the yield model are then compared to the Monte Carlo simulation. Better than 95% agreement was obtained for the Poisson model with redundant taps ranging from 30% to 150% over the minimum.
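The Poisson yield model with redundant taps described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the critical-area parameterization, and the uniform defect-density range are assumptions. A device survives if no more than the redundant number of taps fail, and tap failures follow a Poisson law whose mean scales with defect density times critical area.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam**i / math.factorial(i) for i in range(k + 1))

def yield_with_redundancy(area, d_max, n_redundant, n_steps=1000):
    """Yield of a tap array tolerating up to n_redundant tap failures,
    averaged over a defect density assumed uniform on [0, d_max]
    (hypothetical units; midpoint-rule integration)."""
    total = 0.0
    for i in range(n_steps):
        d = (i + 0.5) * d_max / n_steps
        total += poisson_cdf(n_redundant, d * area)
    return total / n_steps
```

As the abstract reports, adding redundant taps raises the predicted yield: `yield_with_redundancy(1.0, 1.0, 2)` exceeds `yield_with_redundancy(1.0, 1.0, 0)` for the same defect-density range.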
THE STRUCTURE OF THE LOCAL HOT BUBBLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W.; Galeazzi, M.; Uprety, Y.
Diffuse X-rays from the Local Galaxy (DXL) is a sounding rocket mission designed to quantify and characterize the contribution of Solar Wind Charge eXchange (SWCX) to the Diffuse X-ray Background and study the properties of the Local Hot Bubble (LHB). Based on the results from the DXL mission, we quantified and removed the contribution of SWCX to the diffuse X-ray background measured by the ROSAT All Sky Survey. The "cleaned" maps were used to investigate the physical properties of the LHB. Assuming thermal ionization equilibrium, we measured a highly uniform temperature distributed around kT = 0.097 keV ± 0.013 keV (FWHM) ± 0.006 keV (systematic). We also generated a thermal emission measure map and used it to characterize the three-dimensional (3D) structure of the LHB, which we found to be in good agreement with the structure of the local cavity measured from dust and gas.
Testing Small CPAS Parachutes Using HIVAS
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Hennings, Elsa; Bernatovich, Michael A.
2013-01-01
The High Velocity Airflow System (HIVAS) facility at the Naval Air Warfare Center (NAWC) at China Lake was successfully used as an alternative to flight test to determine the parachute drag performance of two small Capsule Parachute Assembly System (CPAS) canopies. A similar parachute with known performance was also tested as a control. Real-time computations of the drag coefficient gave unrealistically low values, because HIVAS produces a non-uniform flow which rapidly decays from a high central core flow. Additional calibration runs were performed to characterize this flow, assuming radial symmetry about the centerline. The flow field was used to post-process effective flow velocities at each throttle setting and parachute diameter using the definition of the momentum flux factor. Because one parachute had significant oscillations, additional calculations were required to estimate the projected flow at off-axis angles. The resulting drag data from HIVAS compared favorably to previously estimated parachute performance based on scaled data from analogous CPAS parachutes. The data will improve drag area distributions in the next version of the CPAS Model Memo.
Regional trends in mercury distribution across the Great Lakes states, north central USA
NASA Astrophysics Data System (ADS)
Nater, Edward A.; Grigal, David F.
1992-07-01
CONCENTRATIONS of mercury in the environment are increasing as a result of human activities, notably fossil-fuel burning and incineration of municipal wastes. Increasing levels of mercury in aquatic environments and consequently in fish populations are recognized as a public-health problem [1,2]. Enhanced mercury concentrations in lake sediments relative to pre-industrial values have also been attributed to anthropogenic pollution. It is generally assumed that atmospheric mercury deposition is dominated by global-scale processes, consequently being regionally uniform. Here, to the contrary, we report a significant gradient in concentrations and total amounts of mercury in organic litter and surface mineral soil along a transect of forested sites across the north central United States from northwestern Minnesota to eastern Michigan. This gradient is accompanied by parallel changes in wet sulphate deposition and human activity along the transect, suggesting that the regional variation in mercury content is due to deposition of anthropogenic mercury, most probably in particulate form.
Testing stress shadowing effects at the South American subduction zone
NASA Astrophysics Data System (ADS)
Roth, F.; Dahm, T.; Hainzl, S.
2017-11-01
The seismic gap hypothesis assumes that a characteristic earthquake is followed by a long period with a reduced occurrence probability for the next large event on the same fault segment, as a consequence of the induced stress shadow. The gap model is commonly accepted by geologists and is often used for time-dependent seismic hazard estimations. However, systematic and rigorous tests to verify the seismic gap model have often failed so far, which might be partially related to limited data and too tight model assumptions. In this study, we relax the assumption of a characteristic size and location of repeating earthquakes and analyse one of the best available data sets, namely the historical record of major earthquakes along a 3000 km long linear segment of the South American subduction zone. To test whether a stress shadow effect is observable, we compiled a comprehensive catalogue of mega-thrust earthquakes along this plate boundary from 1520 to 2015 containing 174 earthquakes with Mw > 6.5. In our new testing approach, we analyse the time span between an earthquake and the last event that ruptured the epicentre location, where we consider the impact of the uncertainties of epicentres and rupture extensions. Assuming uniform boundary conditions along the trench, we compare the distribution of these recurrence times with simple recurrence models. We find that in all cases the distribution is close to exponential, corresponding to a random (Poissonian) process, despite some tendency toward clustering of the Mw ≥ 7 events and a weak quasi-periodicity of the Mw ≥ 8 earthquakes.
To verify whether the absence of a clear stress shadow signal is related to physical assumptions or data uncertainties, we perform simulations of a physics-based stochastic earthquake model considering rate- and state-dependent earthquake nucleation, which are adapted to the observations with regard to the number of events, spatial extent, size distribution and involved uncertainties. Our simulations show that the catalogue uncertainties lead to a significant blurring of the theoretically peaked distribution, but the distribution would still be distinguishable from the observed one for Mw ≥ 7 events. However, considering the stress transfer to adjacent fault segments and heterogeneous instead of constant stress drop within the rupture zone can explain the observed recurrence time distribution. We conclude that simplified recurrence models, ignoring the complexity of the underlying physical process, cannot be applied for forecasting the Mw ≥ 7 earthquake occurrence at this plate boundary.
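A quick way to see the Poissonian-versus-periodic distinction drawn above is the coefficient of variation (CV) of inter-event times: a Poisson process gives CV ≈ 1, quasi-periodic recurrence gives CV < 1, and clustering gives CV > 1. The sketch below is a generic illustration of that diagnostic, not the authors' statistical test; the synthetic Poisson catalogue and the rate of 1.0 are assumptions.

```python
import random
import statistics

def cv_of_intervals(times):
    """Coefficient of variation of inter-event times: ~1 for a Poisson
    process, <1 for quasi-periodic recurrence, >1 for clustered events."""
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# Synthetic Poisson catalogue: exponential waiting times at unit rate.
random.seed(42)
t, events = 0.0, []
for _ in range(5000):
    t += random.expovariate(1.0)
    events.append(t)

print(round(cv_of_intervals(events), 2))  # close to 1 for a random process
```

A perfectly periodic catalogue (events at 0, 1, 2, …) would return a CV of exactly zero, which is the signature the gap model would predict and the observed catalogue does not show.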
Kistner, Emily O; Muller, Keith E
2004-09-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
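Cronbach's alpha itself is straightforward to compute from the item-by-subject score matrix; the abstract's contribution concerns its sampling distribution, not this formula. Below is a minimal sketch of the classic point estimate only (the function name and example scores are illustrative, not from the paper).

```python
import statistics

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across n subjects.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of subject totals),
    using sample variances throughout."""
    k = len(items)
    n = len(items[0])
    item_vars = [statistics.variance(it) for it in items]
    totals = [sum(it[j] for it in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(item_vars) / statistics.variance(totals))
```

Perfectly redundant items give alpha = 1; items that vary independently drive alpha toward 0, which is why alpha is read as a reliability measure.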
A Comprehensive Theory of Algorithms for Wireless Networks and Mobile Systems
2016-06-08
Erez Kantor, Zvi Lotker, Merav Parter, and David Peleg. Nonuniform SINR+Voronoi diagrams are effectively uniform. In Yoram Moses, editor, Distributed Computing: 29th International Symposium, Lecture Notes in Computer Science, page 559. Springer, 2014.
ERIC Educational Resources Information Center
Bhattacharyya, Pratip; Chakrabarti, Bikas K.
2008-01-01
We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
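The quantity studied above, the mean distance to the nth neighbour among uniformly distributed random points, is easy to estimate by Monte Carlo. The sketch below is a generic illustration under stated assumptions (function name, box size, and reference point at the box centre are all choices of this sketch, not the paper's method); for D = 2, unit density, and n = 1 the exact mean is 1/(2√ρ) = 0.5, which the estimate reproduces.

```python
import math
import random

def mean_nth_neighbour(n, density, dim=2, trials=2000, box=10.0):
    """Monte Carlo estimate of the mean distance from a reference point at
    the centre of a D-dimensional box to its n-th nearest neighbour, with
    points scattered at the given uniform density."""
    random.seed(0)
    npts = int(density * box**dim)
    acc = 0.0
    for _ in range(trials):
        dists = sorted(
            math.dist([box / 2] * dim,
                      [random.uniform(0, box) for _ in range(dim)])
            for _ in range(npts)
        )
        acc += dists[n - 1]  # n-th smallest distance in this realization
    return acc / trials
```

The box must be large compared with the expected nth-neighbour distance so that edge effects at the boundary stay negligible.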
Electrophoretic sample insertion. [device for uniformly distributing samples in flow path
NASA Technical Reports Server (NTRS)
Mccreight, L. R. (Inventor)
1974-01-01
Two conductive screens located in the flow path of an electrophoresis sample separation apparatus are charged electrically. The sample is introduced between the screens, and the charge is sufficient to disperse and hold the samples across the screens. When the charge is terminated, the samples are uniformly distributed in the flow path. Additionally, a first separation by charged properties has been accomplished.
Mirbozorgi, S Abdollah; Bahrami, Hadi; Sawan, Mohamad; Gosselin, Benoit
2016-04-01
This paper presents a novel experimental chamber with uniform wireless power distribution in 3D for enabling long-term biomedical experiments with small freely moving animal subjects. The implemented power transmission chamber prototype is based on arrays of parallel resonators and multicoil inductive links, which form a novel and highly efficient wireless power transmission system. The power transmitter unit includes several identical resonators enclosed in a scalable array of overlapping square coils which are connected in parallel to provide uniform power distribution along x and y. Moreover, the proposed chamber uses two arrays of primary resonators, facing each other and connected in parallel, to achieve uniform power distribution along the z axis. Each surface includes 9 overlapped coils connected in parallel and implemented in two layers of FR4 printed circuit board. The chamber features a natural power localization mechanism, which simplifies its implementation and eases its operation by avoiding the need for active detection and control mechanisms. A single power surface based on the proposed approach can provide a power transfer efficiency (PTE) of 69% and a power delivered to the load (PDL) of 120 mW for a separation distance of 4 cm, whereas the complete chamber prototype provides a uniform PTE of 59% and a PDL of 100 mW in 3D, everywhere inside the chamber with a size of 27×27×16 cm³.
NASA Astrophysics Data System (ADS)
Chen, Xiaowei; Wang, Wenping; Wan, Min
2013-12-01
It is essential to calculate the magnetic force when studying electromagnetic flat sheet forming: calculating the magnetic force is the basis for analyzing the sheet deformation and optimizing the technical parameters. The magnetic force distribution on the sheet can be obtained by numerical simulation of the electromagnetic field, which offers significant advantages over other computing methods, such as higher calculation accuracy and ease of use. In this paper, to study the magnetic force distribution on a small flat sheet in electromagnetic forming when a flat round spiral coil, a flat rectangular spiral coil, or a uniform pressure coil is adopted, 3D finite element models are established with the software ANSYS/EMAG. The magnetic force distribution on the sheet is analyzed for plane geometries of the sheet equal to or smaller than the coil geometry under a fixed discharge impulse. The results show that when the physical dimensions of the sheet are smaller than the corresponding dimensions of the coil, the variation of the induced-current channel width on the sheet causes an induced-current crowding effect that strongly influences the magnetic force distribution, and the degree of inhomogeneity of the magnetic force distribution increases nearly linearly with the variation of the induced-current channel width. The small uniform pressure coil produces an approximately uniform magnetic force distribution on the sheet, but is prone to early failure. A desirable magnetic force distribution can be achieved with a unilaterally placed flat rectangular spiral coil, which can be taken as the preferred option because its working life is longer than that of the small uniform pressure coil.
Cylindrically distributing optical fiber tip for uniform laser illumination of hollow organs
NASA Astrophysics Data System (ADS)
Buonaccorsi, Giovanni A.; Burke, T.; MacRobert, Alexander J.; Hill, P. D.; Essenpreis, Matthias; Mills, Timothy N.
1993-05-01
To predict the outcome of laser therapy it is important to possess, among other things, an accurate knowledge of the intensity and distribution of the laser light incident on the tissue. For irradiation of the internal surfaces of hollow organs, modified fiber tips can be used to shape the light distribution to best suit the treatment geometry. There exist bulb-tipped optical fibers emitting a uniform isotropic distribution of light suitable for the treatment of organs which approximate a spherical geometry--the bladder, for example. For the treatment of organs approximating a cylindrical geometry--e.g. the oesophagus--an optical fiber tip which emits a uniform cylindrical distribution of light is required. We report on the design, development and testing of such a device, the CLD fiber tip. The device was made from a solid polymethylmethacrylate (PMMA) rod, 27 mm in length and 4 mm in diameter. One end was shaped and 'silvered' to form a mirror which reflected the light emitted from the delivery fiber positioned at the other end of the rod. The shape of the mirror was such that the light fell with uniform intensity on the circumferential surface of the rod. This surface was coated with BaSO4 reflectance paint to couple the light out of the rod and onto the surface of the tissue.
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distribution of 69 potentially threatening NEOs from these lists that produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking into account the impact probabilities introduced significant variation into the results and the impact probability distribution, consequently, deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and, thus, represents a man-made component that is largely disconnected from natural processes. 
It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
Kinetic market models with single commodity having price fluctuations
NASA Astrophysics Data System (ADS)
Chatterjee, A.; Chakrabarti, B. K.
2006-12-01
We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as that of the money distribution for a market with agents having random saving propensity.
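The money-exchange part of such kinetic market models is compact enough to sketch. The code below is a minimal illustration of the standard kinetic exchange rule with a uniform saving propensity (the CC-type rule; function name and parameter values are choices of this sketch, and the commodity/price dynamics of the abstract are omitted): in each trade a random pair saves a fraction lam of its money and randomly splits the rest, so total money is conserved.

```python
import random

def simulate_market(n_agents=500, steps=200000, lam=0.5, seed=1):
    """Kinetic exchange with uniform saving propensity lam: each trade
    conserves money; the non-saved pool is split by a uniform random
    fraction eps between the trading pair."""
    random.seed(seed)
    money = [1.0] * n_agents
    for _ in range(steps):
        i, j = random.randrange(n_agents), random.randrange(n_agents)
        if i == j:
            continue
        eps = random.random()
        pot = (1 - lam) * (money[i] + money[j])
        money[i] = lam * money[i] + eps * pot
        money[j] = lam * money[j] + (1 - eps) * pot
    return money
```

With lam = 0 the steady-state money distribution is exponential; a nonzero uniform lam shifts the most probable money away from zero, consistent with the gamma-like distribution noted in the abstract.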
Simulated laser fluorosensor signals from subsurface chlorophyll distributions
NASA Technical Reports Server (NTRS)
Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.
1986-01-01
A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 microg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.
Response of a thin airfoil encountering strong density discontinuity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marble, F.E.
1993-12-01
Airfoil theory for unsteady motion has been developed extensively assuming the undisturbed medium to be of uniform density, a restriction accurate for motion in the atmosphere. In some instances, notably for airfoils comprising fan, compressor and turbine blade rows, the undisturbed medium may carry density variations or "spots", resulting from non-uniformities in temperature or composition, of a size comparable to the blade chord. This condition exists for turbine blades immediately downstream of the main burner of a gas turbine engine, where density fluctuations of the order of 50 percent may occur. Disturbances of a somewhat smaller magnitude arise from the ingestion of hot boundary layers into fans, and of exhaust into hovercraft. Because these regions of non-uniform density convect with the moving medium, the airfoil experiences a time-varying load and moment, which the authors calculate.
Stress intensity factors for an inclined crack in an orthotropic strip
NASA Technical Reports Server (NTRS)
Delale, F.; Bakirtas, I.; Erdogan, F.
1978-01-01
The elastostatic problem for an infinite orthotropic strip containing a crack is considered. It is assumed that the orthogonal axes of material orthotropy may have an arbitrary angular orientation with respect to the orthogonal axes of geometric symmetry of the uncracked strip. The crack is located along an axis of orthotropy, hence at an arbitrary angle with respect to the sides of the strip. The general problem is formulated in terms of a system of singular integral equations for arbitrary crack surface tractions. As examples Modes I and II stress intensity factors are calculated for the strip having an internal or an edge crack with various lengths and angular orientations. In most calculations uniform tension or uniform bending away from the crack region is used as the external load. Limited results are also given for uniform normal or shear tractions on the crack surface.
Effects of static equilibrium and higher-order nonlinearities on rotor blade stability in hover
NASA Technical Reports Server (NTRS)
Crespodasilva, Marcelo R. M.; Hodges, Dewey H.
1988-01-01
The equilibrium and stability of the coupled elastic lead/lag, flap, and torsion motion of a cantilever rotor blade in hover are addressed, and the influence of several higher-order terms in the equations of motion of the blade is determined for a range of values of collective pitch. The blade is assumed to be untwisted and to have uniform properties along its span. In addition, chordwise offsets between its elastic, tension, mass, and aerodynamic centers are assumed to be negligible for simplicity. The aerodynamic forces acting on the blade are modeled using a quasi-steady, strip-theory approximation.
Lu, Jennifer Q; Yi, Sung Soo
2006-04-25
A monolayer of gold-containing surface micelles has been produced by spin-coating solution micelles formed by the self-assembly of the gold-modified polystyrene-b-poly(2-vinylpyridine) block copolymer in toluene. After oxygen plasma removed the block copolymer template, highly ordered and uniformly sized nanoparticles were generated. Unlike other published methods that require reduction treatments to form gold nanoparticles in the zero-valent state, these as-synthesized nanoparticles are in the form of metallic gold. These gold nanoparticles have been demonstrated to be an excellent catalyst system for growing small-diameter silicon nanowires. The uniformly sized gold nanoparticles have promoted the controllable synthesis of silicon nanowires with a narrow diameter distribution. Because of the ability to form a monolayer of surface micelles with a high degree of order, evenly distributed gold nanoparticles have been produced on a surface. As a result, uniformly distributed, high-density silicon nanowires have been generated. The process described herein is fully compatible with existing semiconductor processing techniques and can be readily integrated into device fabrication.
Filippov, Alexander E; Gorb, Stanislav N
2015-02-06
One of the important problems appearing in experimental realizations of artificial adhesives inspired by gecko foot hair is so-called clusterization. If an artificially produced structure is flexible enough to allow efficient contact with natural rough surfaces, after a few attachment-detachment cycles the fibres of the structure tend to adhere to one another and form clusters. Normally, such clusters are much larger than the original fibres and, because they are less flexible, form much worse adhesive contacts, especially with rough surfaces. The main problem here is that the forces responsible for the clusterization are the same intermolecular forces that attract the fibres to the fractal surface of the substrate. However, arrays of real gecko setae are much less susceptible to this problem. One possible reason is that the ends of the setae have a more sophisticated, non-uniformly distributed three-dimensional structure than that of existing artificial systems. In this paper, we numerically simulated the three-dimensional spatial geometry of non-uniformly distributed branches of nanofibres of the setal tip, studied its attachment-detachment dynamics and discussed its advantages over a uniformly distributed geometry.
ERIC Educational Resources Information Center
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua
2006-01-01
Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated potential damage caused…
NASA Technical Reports Server (NTRS)
Chukwu, E. N.
1980-01-01
The problem of Lurie is posed for systems described by a functional differential equation of neutral type. Sufficient conditions are obtained for absolute stability for the controlled system if it is assumed that the uncontrolled plant equation is uniformly asymptotically stable. Both the direct and indirect control cases are treated.
ERIC Educational Resources Information Center
Morris, Jerome
2008-01-01
Background/Context: Most narratives of Brown v. Board of Education primarily focus on integrated schooling as the ultimate objective in Black people's quest for quality schooling. Rather than uniformly assuming integration as Black people's ideological model, the push by Black people for quality schooling instead should be viewed within the…
Fermilab | Science | Historic Results
The top quark was the first new quark discovered since the bottom quark, found at Fermilab through fixed-target experiments in 1977. Researchers previously had assumed that cosmic rays approach the Earth uniformly from random directions; results indicate that the highest-energy cosmic rays that impact the Earth generally come from the direction of active galactic nuclei, many of them large galaxies.
Bumpy Path into a Profession: What California's Beginning Teachers Experience. Policy Brief 14-2
ERIC Educational Resources Information Center
Koppich, Julia E.; Humphrey, Daniel C.
2014-01-01
In California as elsewhere, state policy anticipates that aspiring teachers will follow a uniform, multistep path into the profession. It assumes they will complete a preparation program and earn a preliminary credential, take a teaching job and be assigned probationary status, complete a two-year induction program (the Beginning Teacher Support…
The Fine-Beam Cathode-Ray Tube and the Observant and Enquiring Student, Part 5.
ERIC Educational Resources Information Center
Webb, John le P.
1984-01-01
Discusses the physics of electromagnetic focussing using an imaginary dialogue between teacher and student. It is assumed that students have been introduced to the underlying theory concerning movement of a charged particle traveling with uniform speed in a magnetic field before seeing a demonstration with the fine-beam cathode-ray tube. (JN)
Pattern optimization of compound optical film for uniformity improvement in liquid-crystal displays
NASA Astrophysics Data System (ADS)
Huang, Bing-Le; Lin, Jin-tang; Ye, Yun; Xu, Sheng; Chen, En-guo; Guo, Tai-Liang
2017-12-01
The density dynamic adjustment algorithm (DDAA) is designed to efficiently improve the uniformity of the integrated backlight module (IBLM) by adjusting the distribution of microstructures on the compound optical film (COF); the COF is constructed in SolidWorks and simulated in TracePro. In order to demonstrate the universality of the proposed algorithm, the initial distribution is allocated by a Bezier curve instead of an empirical value. Simulation results show that the uniformity of the IBLM reaches over 90% after only four rounds. Moreover, the vertical and horizontal full widths at half maximum of the angular intensity are collimated to 24 deg and 14 deg, respectively. Compared with the current industry requirement, the IBLM has an 85% higher luminance uniformity of the emerging light, which demonstrates the feasibility and universality of the proposed algorithm.
Terawatt x-ray free-electron-laser optimization by transverse electron distribution shaping
Emma, C.; Wu, J.; Fang, K.; ...
2014-11-03
We study the dependence of the peak power of a 1.5 Å terawatt (TW), tapered x-ray free-electron laser (FEL) on the transverse electron density distribution. Multidimensional optimization schemes for TW hard x-ray free-electron lasers are applied to the cases of transversely uniform and parabolic electron beam distributions and compared to a Gaussian distribution. The optimizations are performed for a 200 m undulator and a resonant wavelength of λr = 1.5 Å using the fully three-dimensional FEL particle code GENESIS. The study shows that the flatter transverse electron distributions enhance optical guiding in the tapered section of the undulator and increase the maximum radiation power from 1.56 TW for a transversely Gaussian beam to 2.26 TW for the parabolic case and 2.63 TW for the uniform case. Spectral data also show a 30%–70% reduction in energy deposited in the sidebands for the uniform and parabolic beams compared with a Gaussian. An analysis of the transverse coherence of the radiation shows the coherence area to be much larger than the beam spot size for all three distributions, making coherent diffraction imaging experiments possible.
Integrated Joule switches for the control of current dynamics in parallel superconducting strips
NASA Astrophysics Data System (ADS)
Casaburi, A.; Heath, R. M.; Cristiano, R.; Ejrnaes, M.; Zen, N.; Ohkubo, M.; Hadfield, R. H.
2018-06-01
Understanding and harnessing the physics of the dynamic current distribution in parallel superconducting strips holds the key to creating next-generation sensors for single-molecule and single-photon detection. Non-uniformity in the current distribution in parallel superconducting strips leads to low detection efficiency and unstable operation, preventing the scale-up to large-area sensors. Recent studies indicate that non-uniform current distributions occurring in parallel strips can be understood and modeled in the framework of the generalized London model. Here we build on this important physical insight, investigating an innovative design with integrated superconducting-to-resistive Joule switches to break the superconducting loops between the strips and thus control the current dynamics. Employing precision low-temperature nano-optical techniques, we map the uniformity of the current distribution before and after the resistive strip switching event, confirming the effectiveness of our design. These results provide important insights for the development of next-generation large-area superconducting strip-based sensors.
Covariant Uniform Acceleration
NASA Astrophysics Data System (ADS)
Friedman, Yaakov; Scarr, Tzvi
2013-04-01
We derive a 4D covariant Relativistic Dynamics Equation. This equation canonically extends the 3D relativistic dynamics equation F = dp/dt, where F is the 3D force and p = m0γv is the 3D relativistic momentum. The standard 4D equation is only partially covariant. To achieve full Lorentz covariance, we replace the four-force F by a rank 2 antisymmetric tensor acting on the four-velocity. By taking this tensor to be constant, we obtain a covariant definition of uniformly accelerated motion. This solves a problem of Einstein and Planck. We compute explicit solutions for uniformly accelerated motion. The solutions are divided into four Lorentz-invariant types: null, linear, rotational, and general. For null acceleration, the worldline is cubic in the time. Linear acceleration covariantly extends 1D hyperbolic motion, while rotational acceleration covariantly extends pure rotational motion. We use Generalized Fermi-Walker transport to construct a uniformly accelerated family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the Weak Hypothesis of Locality, we obtain local spacetime transformations from a uniformly accelerated frame K' to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. We obtain velocity and acceleration transformations from a uniformly accelerated system K' to an inertial frame K. We introduce the 4D velocity, an adaptation of Horwitz and Piron's notion of "off-shell." We derive the general formula for the time dilation between accelerated clocks. We obtain a formula for the angular velocity of a uniformly accelerated object.
Every rest point of K' is uniformly accelerated, and its acceleration is a function of the observer's acceleration and its position. We obtain an interpretation of the Lorentz-Abraham-Dirac equation as an acceleration transformation from K' to K.
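One plausible rendering of the fully covariant dynamics described in this abstract, offered only as an interpretive sketch (the notation is a reconstruction, not taken verbatim from the paper):

```latex
\frac{dp^{\mu}}{d\tau} = A^{\mu}{}_{\nu}\, u^{\nu},
\qquad A_{\mu\nu} = -A_{\nu\mu},
```

with uniformly accelerated motion defined by taking the antisymmetric tensor $A$ to be constant along the worldline.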
Impact of deformed extreme-ultraviolet pellicle in terms of CD uniformity
NASA Astrophysics Data System (ADS)
Kim, In-Seon; Yeung, Michael; Barouch, Eytan; Oh, Hye-Keun
2015-07-01
The use of the extreme ultraviolet (EUV) pellicle is regarded as the solution for defect control, since it can protect the mask from airborne debris. However, obstacles such as structural weakness and thermal damage impede practical application of the pellicle. For these reasons, flawless fabrication of the pellicle is impossible. In this paper, we discuss the influence of a deformed pellicle in terms of non-uniform intensity distribution and critical dimension (CD) uniformity. It was found that the non-uniformity of the intensity distribution is proportional to the local tilt angle of the pellicle, and that CD variation is linearly proportional to the transmission difference. When we consider the 16 nm line-and-space pattern with dipole illumination (σc=0.8, σr=0.1, NA=0.33), a transmission difference (max-min) of 0.7% causes 0.1 nm CD non-uniformity. The influence of gravity-induced deflection on the aerial image is small enough to ignore; CD non-uniformity is less than 0.1 nm even for the current gap of 2 mm between mask and pellicle. However, heat-induced wrinkling of the EUV pellicle might cause serious image distortion, because a wrinkle causes a transmission loss variation as well as CD non-uniformity. In conclusion, the local angle of a wrinkle, not its period or amplitude, is the main factor in CD uniformity, and a local angle of less than ~270 mrad is needed to achieve 0.1 nm CD uniformity with the 16 nm L/S pattern.
Hydrostatic bearings for a turbine fluid flow metering device
Fincke, J.R.
1980-05-02
A rotor assembly fluid metering device has been improved by development of a hydrostatic bearing fluid system which provides bearing fluid at a common pressure to rotor assembly bearing surfaces. The bearing fluid distribution system produces a uniform film of fluid between bearing surfaces and allows rapid replacement of bearing fluid between bearing surfaces, thereby minimizing bearing wear and corrosion.
Development of extended release dosage forms using non-uniform drug distribution techniques.
Huang, Kuo-Kuang; Wang, Da-Peng; Meng, Chung-Ling
2002-05-01
Development of an extended release oral dosage form for nifedipine using the non-uniform drug distribution matrix method was conducted. The process conducted in a fluid bed processing unit was optimized by controlling the concentration gradient of nifedipine in the coating solution and the spray rate applied to the non-pareil beads. The concentration of nifedipine in the coating was controlled by instantaneous dilutions of coating solution with polymer dispersion transported from another reservoir into the coating solution at a controlled rate. The USP dissolution method equipped with paddles at 100 rpm in 0.1 N hydrochloric acid solution maintained at 37 degrees C was used for the evaluation of release rate characteristics. Results indicated that (1) an increase in the ethyl cellulose content in the coated beads decreased the nifedipine release rate, (2) incorporation of water-soluble sucrose into the formulation increased the release rate of nifedipine, and (3) adjustment of the spray coating solution and the transport rate of polymer dispersion could achieve a dosage form with a zero-order release rate. Since zero-order release rate and constant plasma concentration were achieved in this study using the non-uniform drug distribution technique, further studies to determine in vivo/in vitro correlation with various non-uniform drug distribution dosage forms will be conducted.
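The concentration-gradient coating described above can be caricatured as a well-mixed reservoir that is continuously diluted while being sprayed off. This is a toy model under stated assumptions (instantaneous mixing, constant flow rates); the function name and parameter values are illustrative, not from the paper:

```python
def coating_concentration(c0, v0, dilution_rate, spray_rate, t, dt=0.01):
    # Toy well-mixed reservoir: drug-free polymer dispersion flows in at
    # dilution_rate (volume/time) while coating solution is sprayed off
    # at spray_rate, so the drug concentration of later layers falls.
    c, v = c0, v0
    for _ in range(int(t / dt)):
        drug = c * v - c * spray_rate * dt  # sprayed-off solution carries drug
        v += (dilution_rate - spray_rate) * dt
        c = drug / v
    return c

c_early = coating_concentration(1.0, 1.0, 0.1, 0.05, t=2.0)
c_late = coating_concentration(1.0, 1.0, 0.1, 0.05, t=5.0)
# Layers sprayed later in the run are leaner in drug, giving the
# non-uniform (outer-lean) distribution that flattens the release profile.
```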
WE-DE-201-12: Thermal and Dosimetric Properties of a Ferrite-Based Thermo-Brachytherapy Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warrell, G; Shvydka, D; Parsai, E I
Purpose: The novel thermo-brachytherapy (TB) seed provides a simple means of adding hyperthermia to LDR prostate permanent implant brachytherapy. The high blood perfusion rate (BPR) within the prostate motivates the use of the ferrite and conductive outer layer design for the seed cores. We describe the results of computational analyses of the thermal properties of this ferrite-based TB seed in modelled patient-specific anatomy, as well as studies of the interseed and scatter (ISA) effect. Methods: The anatomies (including the thermophysical properties of the main tissue types) and seed distributions of 6 prostate patients who had been treated with LDR brachytherapy seeds were modelled in the finite element analysis software COMSOL, using ferrite-based TB and additional hyperthermia-only (HT-only) seeds. The resulting temperature distributions were compared to those computed for patient-specific seed distributions, but in uniform anatomy with a constant blood perfusion rate. The ISA effect was quantified in the Monte Carlo software package MCNP5. Results: Compared with temperature distributions calculated in modelled uniform tissue, temperature distributions in the patient-specific anatomy were higher and more heterogeneous. Moreover, the maximum temperature to the rectal wall was typically ∼1 °C greater for patient-specific anatomy than for uniform anatomy. The ISA effect of the TB and HT-only seeds caused a reduction in D90 similar to that found for previously investigated NiCu-based seeds, but of a slightly smaller magnitude. Conclusion: The differences between temperature distributions computed for uniform and patient-specific anatomy for ferrite-based seeds are significant enough that heterogeneous anatomy should be considered. Both types of modelling indicate that ferrite-based seeds provide sufficiently high and uniform hyperthermia to the prostate, without excessively heating surrounding tissues.
The ISA effect of these seeds is slightly less than that for the previously presented NiCu-based seeds.
Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T
2013-01-01
Background The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. Objective This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. Methods We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Findings Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax. PMID:23792324
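The consumption-weighted price statistics and two-sample mean comparisons described above can be sketched generically as follows; the Kish effective-sample-size approximation used for the test is an assumption for illustration, not necessarily the authors' exact method:

```python
import math

def weighted_mean(prices, weights):
    # Mean price, weighted by cigarette consumption.
    return sum(p * w for p, w in zip(prices, weights)) / sum(weights)

def weighted_var(prices, weights):
    # Consumption-weighted variance of prices.
    m = weighted_mean(prices, weights)
    return sum(w * (p - m) ** 2 for p, w in zip(prices, weights)) / sum(weights)

def two_sample_z(pa, wa, pb, wb):
    # Large-sample z statistic for the difference of weighted mean prices,
    # using Kish effective sample sizes to account for the weights.
    def eff_n(w):
        return sum(w) ** 2 / sum(x * x for x in w)
    se = math.sqrt(weighted_var(pa, wa) / eff_n(wa)
                   + weighted_var(pb, wb) / eff_n(wb))
    return (weighted_mean(pa, wa) - weighted_mean(pb, wb)) / se
```

Paired countries with similar income and total excise but different tax structures would then be compared by feeding their self-reported price and consumption vectors to `two_sample_z`.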
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks when cell numbers are small, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells pass through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
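A minimal Monte Carlo sketch of the serial assumption described above: an exponentially distributed lag time per cell, after which the lineage grows as a Yule (pure birth) process. The distributions and parameter values are illustrative, not the paper's fitted model:

```python
import random

def simulate_cell(lag_mean=2.0, growth_rate=1.0, t_end=6.0, rng=random):
    # Serial assumption: a random lag, then exponential-phase growth
    # modeled as a Yule (pure birth) process.
    t = rng.expovariate(1.0 / lag_mean)  # time at which growth commences
    n = 1
    while True:
        t += rng.expovariate(n * growth_rate)  # waiting time to next division
        if t > t_end:
            return n
        n += 1

def replicate_counts(n0=10, trials=200, seed=1):
    # Distribution of total cell numbers over replicate trials,
    # each starting from n0 independent cells.
    rng = random.Random(seed)
    return [sum(simulate_cell(rng=rng) for _ in range(n0))
            for _ in range(trials)]

totals = replicate_counts()
```

The spread of `totals` across trials is the replicate-to-replicate variability the paper approximates with a Weibull conditional distribution.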
Aneurysm permeability following coil embolization: packing density and coil distribution
Chueh, Ju-Yu; Vedantham, Srinivasan; Wakhloo, Ajay K; Carniato, Sarena L; Puri, Ajit S; Bzura, Conrad; Coffin, Spencer; Bogdanov, Alexei A; Gounis, Matthew J
2015-01-01
Background Rates of durable aneurysm occlusion following coil embolization vary widely, and a better understanding of coil mass mechanics is desired. The goal of this study is to evaluate the impact of packing density and coil uniformity on aneurysm permeability. Methods Aneurysm models were coiled using either Guglielmi detachable coils or Target coils. The permeability was assessed by taking the ratio of microspheres passing through the coil mass to those in the working fluid. Aneurysms containing coil masses were sectioned for image analysis to determine surface area fraction and coil uniformity. Results All aneurysms were coiled to a packing density of at least 27%. Packing density, surface area fraction of the dome and neck, and uniformity of the dome were significantly correlated (p<0.05). Hence, multivariate principal components-based partial least squares regression models were used to predict permeability. Similar loading vectors were obtained for packing and uniformity measures. Coil mass permeability was modeled better with the inclusion of packing and uniformity measures of the dome (r2=0.73) than with packing density alone (r2=0.45). The analysis indicates the importance of including a uniformity measure for coil distribution in the dome along with packing measures. Conclusions A densely packed aneurysm with a high degree of coil mass uniformity will reduce permeability. PMID:25031179
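The finding that adding a dome-uniformity measure improves the permeability regression can be illustrated on synthetic data. Ordinary least squares is used here as a simple stand-in for the paper's principal components-based partial least squares models, and the data below are fabricated for illustration only:

```python
import numpy as np

def r_squared(X, y):
    # Ordinary least squares with intercept; returns in-sample R^2.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

rng = np.random.default_rng(0)
packing = rng.uniform(0.27, 0.40, 40)       # packing density (fabricated)
uniformity = rng.uniform(0.5, 1.0, 40)      # dome coil uniformity (fabricated)
permeability = (1.0 - 1.2 * packing - 0.4 * uniformity
                + rng.normal(0.0, 0.03, 40))

r2_packing = r_squared(packing.reshape(-1, 1), permeability)
r2_both = r_squared(np.column_stack([packing, uniformity]), permeability)
# Adding the uniformity predictor cannot lower, and here raises, in-sample R^2,
# mirroring the study's improvement from r^2 = 0.45 to 0.73.
```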
Analytical approach to Eigen-emittance evolution in storage rings
NASA Astrophysics Data System (ADS)
Nash, Boaz
This dissertation develops the subject of beam evolution in storage rings with nearly uncoupled symplectic linear dynamics. Linear coupling and dissipative/diffusive processes are treated perturbatively. The beam distribution is assumed to be Gaussian and a function of the invariants. The development requires two pieces: the global invariants, and the local stochastic processes which change the emittances, or averages of the invariants. A map-based perturbation theory is described, providing explicit expressions for the invariants near each linear resonance, where small perturbations can have a large effect. Emittance evolution is determined by the damping and diffusion coefficients. The discussion is divided into the cases of uniform and non-uniform stochasticity, with synchrotron radiation an example of the former and intrabeam scattering of the latter. For the uniform case, the beam dynamics is captured by a global diffusion coefficient and damping decrement for each eigen-invariant. Explicit expressions for these quantities near coupling resonances are given. In many cases, they are simply related to the uncoupled values. Near a sum resonance, it is found that one of the damping decrements becomes negative, indicating an anti-damping instability. The formalism is applied to a number of examples, including synchrobetatron coupling caused by a crab cavity, a case of current interest where there is concern about operation near the half-integer betatron tune. In the non-uniform case, the moment evolution is computed directly, which is illustrated through the example of intrabeam scattering. Our approach to intrabeam scattering damping and diffusion has the advantage of not requiring a loosely defined Coulomb logarithm. It is found that in some situations there is a small difference between our results and standard approaches such as Bjorken-Mtingwa, which is illustrated by comparison of the two approaches and with a measurement of Au beam evolution in RHIC.
Finally, in combining IBS with the global invariants some general statements about IBS equilibrium can be made. Specifically, it is emphasized that no such equilibrium is possible in a non-smooth lattice, even below transition. Near enough to a synchrobetatron coupling resonance, it is found that even for a smooth ring, no IBS equilibrium occurs.
CFD simulation of the gas flow in a pulse tube cryocooler with two pulse tubes
NASA Astrophysics Data System (ADS)
Yin, C. L.
2015-12-01
In this paper, in order to guide subsequent optimization work, a two-dimensional Computational Fluid Dynamics (CFD) model is developed to simulate the temperature and velocity distributions of the oscillating fluid in a pulse tube cryocooler with two pulse tubes (DPTC) under individual phase shifting. It is found that the axial temperature distribution of the regenerator is generally uniform, and that the temperatures near the center of a given cross section of the two pulse tubes are clearly higher than the corresponding near-wall temperatures. A wall temperature difference of about 0-7 K exists between the two pulse tubes. The velocity distribution near the center of the regenerator is uniform, and there is a distinct injection stream entering at the center of the pulse tubes from the hot end. The mechanisms behind these temperature and velocity distributions are explained.
NASA Astrophysics Data System (ADS)
Monteiro Santos, Fernando A.; Afonso, António R. Andrade; Dupis, André
2007-03-01
Audio-magnetotelluric (AMT) and resistivity (dc) surveys are often used in environmental, hydrological and geothermal evaluation. The separate interpretation of those geophysical data sets assuming two-dimensional models frequently produces ambiguous results. The joint inversion of AMT and dc data is advocated by several authors as an efficient method for reducing the ambiguity inherent to each of those methods. This paper presents results obtained from the two-dimensional joint inversion of dipole-dipole and scalar AMT data acquired in a low-enthalpy geothermal field situated in a graben. The jointly inverted models show a better definition of shallow and deep structures. The results show that the extent of the benefits of joint inversion depends on the number and spacing of the AMT sites. The models obtained from experimental data display a low-resistivity zone (<20 Ω m) in the central part of the graben that was correlated with the geothermal reservoir. The resistivity distribution models were used to estimate the distribution of porosity in the geothermal reservoir, applying two different approaches and considering the effect of clay minerals. The results suggest that the porosity of the reservoir is not uniform, with a maximum that might be in the range of 12% to 24%.
NASA Astrophysics Data System (ADS)
Zheng, R.-F.; Wu, T.-H.; Li, X.-Y.; Chen, W.-Q.
2018-06-01
The problem of a penny-shaped crack embedded in an infinite space of transversely isotropic multi-ferroic composite medium is investigated. The crack is assumed to be subjected to uniformly distributed mechanical, electric and magnetic loads applied symmetrically on the upper and lower crack surfaces. The semi-permeable (limited-permeable) electro-magnetic boundary condition is adopted. By virtue of the generalized method of potential theory and the general solutions, the boundary integro-differential equations governing the mode I crack problem, which are of a nonlinear nature, are established and solved analytically. The exact and complete coupled magneto-electro-elastic field is obtained in terms of elementary functions. Important fracture-mechanics parameters on the crack plane, e.g., the generalized crack surface displacements, the distributions of generalized stresses at the crack tip, the generalized stress intensity factors and the energy release rate, are explicitly presented. To validate the present solutions, a numerical code based on the finite element method is developed for 3D crack problems in the framework of magneto-electro-elasticity. To conveniently evaluate the effect of the medium inside the crack, several empirical formulae are developed based on the numerical results.
Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner
2009-06-01
Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
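The single-hit versus multiple-hit contrast above can be illustrated with a simple Poisson sketch: if the mean number of alpha hits per cell nucleus scales with the local deposition density, then a deposition enhancement factor multiplies the Poisson mean and pushes cells into the multiple-hit regime. The numerical values below are illustrative, not taken from the study:

```python
import math

def hit_probabilities(mean_hits):
    """Poisson probabilities of 0, exactly 1, and 2+ hits per nucleus."""
    p0 = math.exp(-mean_hits)
    p1 = mean_hits * p0
    return {"none": p0, "single": p1, "multiple": 1.0 - p0 - p1}

# An enhancement factor scales the mean hit number at a hot spot:
for enhancement in (1, 10, 100):
    lam = 0.02 * enhancement  # 0.02 mean hits at average deposition (illustrative)
    probs = hit_probabilities(lam)
    print(f"enhancement {enhancement:3d}: "
          + ", ".join(f"{k}={v:.4f}" for k, v in probs.items()))
```

At the average-deposition mean the multiple-hit probability is negligible, while a hundredfold enhancement makes multiple hits common, mirroring the hot-spot argument in the abstract.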
Fragmentation and melting of the seasonal sea ice cover
NASA Astrophysics Data System (ADS)
Feltham, D. L.; Bateson, A.; Schroeder, D.; Ridley, J. K.; Aksenov, Y.
2017-12-01
Recent years have seen a rapid reduction in the summer extent of Arctic sea ice. This trend has implications for navigation, oil exploration, wildlife, and local communities. Furthermore the Arctic sea ice cover impacts the exchange of heat and momentum between the ocean and atmosphere with significant teleconnections across the climate system, particularly mid to low latitudes in the Northern Hemisphere. The treatment of melting and break-up processes of the seasonal sea ice cover within climate models is currently limited. In particular floes are assumed to have a uniform size which does not evolve with time. Observations suggest however that floe sizes can be modelled as truncated power law distributions, with different exponents for smaller and larger floes. This study aims to examine factors controlling the floe size distribution in the seasonal and marginal ice zone. This includes lateral melting, wave induced break-up of floes, and the feedback between floe size and the mixed ocean layer. These results are then used to quantify the proximate mechanisms of seasonal sea ice reduction in a sea ice—ocean mixed layer model. Observations are used to assess and calibrate the model. The impacts of introducing these processes to the model will be discussed and the preliminary results of sensitivity and feedback studies will also be presented.
NASA Astrophysics Data System (ADS)
Ali, A.; Elkington, S. R.; Malaspina, D.
2014-12-01
The Van Allen radiation belts contain highly energetic particles which interact with a variety of plasma and MHD waves. Waves with frequencies in the ULF range are understood to play an important role in loss and acceleration of energetic particles. We are investigating the contributions from perturbations in both the magnetic and the electric fields in driving radial diffusion of charged particles and wish to probe two unanswered questions about ULF wave driven radial transport. First, how important are the fluctuations in the magnetic field compared with the fluctuations in the electric field in driving radial diffusion? Second, how does ULF wave power distribution in azimuth affect radial diffusion? Analytic treatments of the diffusion coefficients generally assume uniform distribution of power in azimuth but in situ measurements suggest otherwise. We present results from a study using the electric and magnetic field measurements from the Van Allen Probes to estimate the radial diffusion coefficients as a function of L and Kp. During the lifetime of the RBSP mission to date, there has been a dearth of solar activity. This compels us to consider Kp as the only time and activity dependent parameter instead of solar wind velocity and pressure.
Malinouski, Mikalai; Kehr, Sebastian; Finney, Lydia; Vogt, Stefan; Carlson, Bradley A.; Seravalli, Javier; Jin, Richard; Handy, Diane E.; Park, Thomas J.; Loscalzo, Joseph; Hatfield, Dolph L.
2012-01-01
Abstract Aim: Recent advances in quantitative methods and sensitive imaging techniques of trace elements provide opportunities to uncover and explain their biological roles. In particular, the distribution of selenium in tissues and cells under both physiological and pathological conditions remains unknown. In this work, we applied high-resolution synchrotron X-ray fluorescence microscopy (XFM) to map selenium distribution in mouse liver and kidney. Results: Liver showed a uniform selenium distribution that was dependent on selenocysteine tRNA[Ser]Sec and dietary selenium. In contrast, kidney selenium had both uniformly distributed and highly localized components, the latter visualized as thin circular structures surrounding proximal tubules. Other parts of the kidney, such as glomeruli and distal tubules, only manifested the uniformly distributed selenium pattern that co-localized with sulfur. We found that proximal tubule selenium localized to the basement membrane. It was preserved in Selenoprotein P knockout mice, but was completely eliminated in glutathione peroxidase 3 (GPx3) knockout mice, indicating that this selenium represented GPx3. We further imaged kidneys of another model organism, the naked mole rat, which showed a diminished uniformly distributed selenium pool, but preserved the circular proximal tubule signal. Innovation: We applied XFM to image selenium in mammalian tissues and identified a highly localized pool of this trace element at the basement membrane of kidneys that was associated with GPx3. Conclusion: XFM allowed us to define and explain the tissue topography of selenium in mammalian kidneys at submicron resolution. Antioxid. Redox Signal. 16, 185–192. PMID:21854231
Keeler, Bonnie L.; Gourevitch, Jesse D.; Polasky, Stephen; Isbell, Forest; Tessum, Chris W.; Hill, Jason D.; Marshall, Julian D.
2016-01-01
Despite growing recognition of the negative externalities associated with reactive nitrogen (N), the damage costs of N to air, water, and climate remain largely unquantified. We propose a comprehensive approach for estimating the social cost of nitrogen (SCN), defined as the present value of the monetary damages caused by an incremental increase in N. This framework advances N accounting by considering how each form of N causes damages at specific locations as it cascades through the environment. We apply the approach to an empirical example that estimates the SCN for N applied as fertilizer. We track impacts of N through its transformation into atmospheric and aquatic pools and estimate the distribution of associated costs to affected populations. Our results confirm that there is no uniform SCN. Instead, changes in N management will result in different N-related costs depending on where N moves and the location, vulnerability, and preferences of populations affected by N. For example, we found that the SCN per kilogram of N fertilizer applied in Minnesota ranges over several orders of magnitude, from less than $0.001/kg N to greater than $10/kg N, illustrating the importance of considering the site, the form of N, and end points of interest rather than assuming a uniform cost for damages. Our approach for estimating the SCN demonstrates the potential of integrated biophysical and economic models to illuminate the costs and benefits of N and inform more strategic and efficient N management. PMID:27713926
A quasilinear operator retaining magnetic drift effects in tokamak geometry
NASA Astrophysics Data System (ADS)
Catto, Peter J.; Lee, Jungpyo; Ram, Abhay K.
2017-12-01
The interaction of radio frequency waves with charged particles in a magnetized plasma is usually described by the quasilinear operator that was originally formulated by Kennel & Engelmann (Phys. Fluids, vol. 9, 1966, pp. 2377-2388). In their formulation the plasma is assumed to be homogeneous and embedded in a uniform magnetic field. In tokamak plasmas the Kennel-Engelmann operator does not capture the magnetic drifts of the particles that are inherent to the non-uniform magnetic field. To overcome this deficiency a combined drift and gyrokinetic derivation is employed to derive the quasilinear operator for radio frequency heating and current drive in a tokamak with magnetic drifts retained. The derivation requires retaining the magnetic moment to higher order in both the unperturbed and perturbed kinetic equations. The formal prescription for determining the perturbed distribution function then follows a novel procedure in which two non-resonant terms must be evaluated explicitly. The systematic analysis leads to a diffusion equation that is compact and completely expressed in terms of the drift kinetic variables. The equation is not transit averaged, and satisfies the entropy principle, while retaining the full poloidal angle variation without resorting to Fourier decomposition. As the diffusion equation is in physical variables, it can be implemented in any computational code. In the Kennel-Engelmann formalism, the wave-particle resonant delta function is either for the Landau resonance or the Doppler shifted cyclotron resonance. In the combined gyro and drift kinetic approach, a term related to the magnetic drift modifies the resonance condition.
NASA Astrophysics Data System (ADS)
Galicher, R.; Marois, C.; Macintosh, B.; Zuckerman, B.; Barman, T.; Konopacky, Q.; Song, I.; Patience, J.; Lafrenière, D.; Doyon, R.; Nielsen, E. L.
2016-10-01
Context. Radial velocity and transit methods are effective for the study of short orbital period exoplanets, but they hardly probe objects at large separations, for which direct imaging can be used. Aims: We carried out the international deep planet survey of 292 young nearby stars to search for giant exoplanets and determine their frequency. Methods: We developed a pipeline for uniform processing of all the data that we have recorded with NIRC2/Keck II, NIRI/Gemini North, NICI/Gemini South, and NACO/VLT over 14 yr. The pipeline first applies cosmetic corrections and then reduces the speckle intensity to enhance the contrast in the images. Results: The main result of the international deep planet survey is the discovery of the HR 8799 exoplanets. We also detected 59 visual multiple systems, including 16 new binary stars and 2 new triple stellar systems, as well as 2279 point-like sources. We used Monte Carlo simulations and Bayes' theorem to determine that 1.05 (+2.80, -0.70)% of stars harbor at least one giant planet between 0.5 and 14 MJ and between 20 and 300 AU. This result is obtained assuming uniform distributions of planet masses and semi-major axes. If we instead consider power law distributions as measured for close-in planets, the derived frequency is 2.30 (+5.95, -1.55)%, recalling the strong impact of assumptions on Monte Carlo output distributions. We also find no evidence that the derived frequency depends on the mass of the host star, whereas it does for close-in planets. Conclusions: The international deep planet survey provides a database of confirmed background sources that may be useful for other exoplanet direct imaging surveys. It also puts new constraints on the number of stars with at least one giant planet, reducing by a factor of two the frequencies derived by almost all previous works.
Tables 11-15 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/594/A63
Error Assessment of Global Ionosphere Models for the Vertical Electron Content
NASA Astrophysics Data System (ADS)
Dettmering, D.; Schmidt, M.
2012-04-01
The Total Electron Content (TEC) is a key parameter in ionosphere modeling. It has a major impact on the propagation of radio waves in the ionized atmosphere, which is crucial for terrestrial and Earth-space communications, including navigation satellite systems such as GNSS. Most existing TEC models assume all free electrons are condensed in one thin layer and neglect the vertical distribution (single-layer approach); these, called Global Ionosphere Models (GIM), describe the Vertical Electron Content (VTEC) as a function of latitude, longitude and time. The most common GIMs are computed by the International GNSS Service (IGS) and are based on GNSS measurements mapped from slant TEC to the vertical by simple mapping functions. Five analysis centers compute solutions, which are combined into one final IGS product. In addition, global VTEC values from climatological ionosphere models such as IRI2007 and NIC09 are available. All these models have no (or only sparse) input data over the oceans and show poorer accuracy in these regions. To overcome these disadvantages, measurement data sets distributed uniformly over continents and open oceans should be used. At DGFI, an approach has been developed using B-spline functions to model the VTEC in three dimensions. In addition to terrestrial GNSS measurements, data from satellite altimetry and radio occultation from Low Earth Orbiters (LEO) are used as input to ensure a more uniform data distribution. The accuracy of the different GIMs depends on the quality and quantity of the input data as well as on the quality of the model approach and the actual ionosphere conditions. Most models provide RMS values together with the VTEC; however, most of these values represent only precision and are not meaningful for a realistic error assessment.
In order to get an impression of the absolute accuracy of the models in different regions, this contribution compares different GIMs (IGS, CODE, JPL, DGFI, IRI2007, and NIC09) to each other and to actual measurements. To cover different ionosphere conditions, two time periods of about two weeks each are used: one in May 2002 with high solar activity and one in December 2008 with moderate activity. This procedure provides more reasonable error estimates for the GIMs under investigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Dept of Radiation Oncology, New York Weill Cornell Medical Ctr, New York, NY
Purpose: To develop a generalized statistical model that incorporates the treatment uncertainty from the rotational error of the single iso-center technique, and to calculate the additional PTV (planning target volume) margin required to compensate for this error. Methods: The random vectors for setup and additional rotation errors in the three-dimensional (3D) patient coordinate system were assumed to follow the 3D independent normal distribution with zero mean, with standard deviations σx, σy, σz for setup error and a uniform σR for rotational error. Both random vectors were summed, normalized and transformed to spherical coordinates to derive the chi distribution with 3 degrees of freedom for the radial distance ρ. The PTV margin was determined using the critical value of this distribution at the 0.05 significance level, so that 95% of the time the treatment target would be covered by ρ. The additional PTV margin required to compensate for the rotational error was calculated as a function of σx, σy, σz and σR. Results: The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2 mm PTV margin (or σx = σy = σz = 0.7 mm), a σR = 0.32 mm will decrease the PTV coverage from 95% to 90% of the time, i.e., an additional 0.2 mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR > 0.3 mm will lead to an additional PTV margin that cannot be ignored; the maximal σR that can be ignored is 0.0064 rad (or 0.37°) for an iso-to-target distance of 5 cm, or 0.0032 rad (or 0.18°) for an iso-to-target distance of 10 cm. Conclusions: The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the iso-center and target is large.
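The margin recipe described above fits in a short calculation: the normalized radial error follows a chi distribution with 3 degrees of freedom, so the 95% margin is the 0.95 quantile of that distribution times the combined standard deviation. A sketch with our own variable names, finding the quantile by bisection on the closed-form chi(3) CDF:

```python
import math

def chi3_cdf(x):
    """Closed-form CDF of the chi distribution with 3 degrees of freedom."""
    return (math.erf(x / math.sqrt(2.0))
            - math.sqrt(2.0 / math.pi) * x * math.exp(-x * x / 2.0))

def chi3_ppf(p):
    """Inverse CDF by bisection (the CDF is monotone; 60 halvings suffice)."""
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if chi3_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ptv_margin(sigma_setup, sigma_rot=0.0, coverage=0.95):
    """Margin covering the radial error with the given probability."""
    return chi3_ppf(coverage) * math.hypot(sigma_setup, sigma_rot)

base = ptv_margin(0.7)               # close to the uniform 2 mm margin quoted above
extra = ptv_margin(0.7, 0.32) - base  # the abstract reports roughly 0.2 mm
print(f"base margin {base:.2f} mm, extra for sigma_R = 0.32 mm: {extra:.2f} mm")
```

With σ = 0.7 mm per axis this reproduces a margin near 2 mm, and adding σR = 0.32 mm in quadrature yields the additional margin of about 0.2 mm quoted in the Results.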
NASA Astrophysics Data System (ADS)
Leonard, E. M.; Laabs, B. J.; Refsnider, K. A.; Plummer, M. A.; Jacobsen, R. E.; Wollenberg, J. A.
2010-12-01
Global climate model (GCM) simulations of the last glacial maximum (LGM) in the western United States predict changes in atmospheric circulation and storm tracks that would have resulted in significantly less-than-modern precipitation in the Northwest and northern Rockies, and significantly more-than-modern precipitation in the Southwest and southern Rockies. Model simulations also suggest that late Pleistocene pluvial lakes in the intermontane West may have modified local moisture regimes in areas immediately downwind. In this study, we present results of the application of a coupled energy/mass-balance and glacier-flow model (Plummer and Phillips, 2003) to reconstructed paleoglaciers in the Rocky Mountains of Utah, New Mexico, Colorado, and Wyoming to assess the changes from modern climate that would have been necessary to sustain each glacier in mass-balance equilibrium at its LGM extent. Results demonstrate that strong west-to-east and north-to-south gradients in LGM precipitation, relative to present, would be required if a uniform LGM temperature depression with respect to modern is assumed across the region. At an assumed 7 °C temperature depression, approximately modern precipitation would have been necessary to support LGM glaciation in the Colorado Front Range, significantly less than modern precipitation to support glaciation in the Teton Range, and almost twice modern precipitation to sustain glaciers in the Wasatch and Uinta ranges of Utah and the New Mexico Sangre de Cristo Range. The observed west-to-east (Utah-to-Colorado) LGM moisture gradient is consistent with precipitation enhancement from pluvial Lake Bonneville, decreasing with distance downwind from the lake. The north-to-south (Wyoming-to-New Mexico) LGM moisture gradient is consistent with a southward LGM displacement of the mean winter storm track associated with the winter position of the Pacific Jet Stream across the western U.S.
Our analysis of paleoglacier extents in the Rocky Mountain region supports the results of GCM simulations of western U.S. precipitation distribution during the LGM, and suggests that this approach provides a practical means of testing such hypotheses about large-scale paleoclimate patterns. Finally, we note that most GCM results indicate greater LGM temperature depression in the northern and eastern portions of the study region than in its southern and western portions - which would necessitate LGM precipitation differences even greater than those determined based on an assumed uniform temperature depression.
SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Abstract Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between the dynamic subtask level and the static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first distribution was a uniform, or uninformed, distribution that assumed the frequency of each PSF level was equally likely. The second, non-continuous distribution took the frequency of PSF levels as identified from an assessment of the HERA database. These two different approaches were created to identify the resulting distribution of the HEP. The resulting HEP that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average and maximum HFE calculations applied. To calculate these three values, three events, A, B and C, are generated from the PSF level frequencies comprised of subtasks. The median HFE selects the median PSF level from each PSF and calculates the HEP. The average HFE takes the mean PSF level, and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. Next, the observed frequency of PSF levels was applied, with the resulting HEP behaving log-normally with a majority of the values under 2.5% HEP.
The median, average and maximum HFE calculations did yield different answers for the HFE. The HFE maximum grossly overestimates the HFE, while the HFE distribution lies below the HFE median and above the HFE average. Conclusions: Dynamic task modeling can be pursued through the framework of SPAR-H. The distributions associated with each PSF need to be identified, and may change depending upon the scenario. However, it is very unlikely that each PSF level is equally likely, as the resulting HEP distribution is strongly centered at 100%, which is unrealistic. Other distributions may need to be identified for PSFs to facilitate the transition to dynamic task modeling. Additionally, discrete distributions need to be exchanged for continuous ones so that simulations of the HFE can advance further. This paper provides a method to explore dynamic subtask-to-task translation and provides examples of the process using the SPAR-H method.
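The uniform-versus-observed comparison can be mimicked with a small Monte Carlo: sample a level for each PSF, multiply the corresponding multipliers onto a nominal HEP, and cap the product at 1. The multiplier values and level frequencies below are placeholders for illustration, not SPAR-H or HERA data:

```python
import random

NOMINAL_HEP = 1e-3
LEVELS = [0.1, 1.0, 10.0]        # illustrative PSF multipliers (good/nominal/poor)
UNIFORM = [1.0, 1.0, 1.0]        # each level equally likely
OBSERVED = [0.10, 0.85, 0.05]    # assumed observed frequencies (placeholder)

def sample_hep(weights, n_psfs=8, rng=random):
    """One Monte Carlo draw: multiply sampled PSF levels onto the nominal HEP."""
    hep = NOMINAL_HEP
    for _ in range(n_psfs):
        hep *= rng.choices(LEVELS, weights=weights)[0]
    return min(hep, 1.0)         # probabilities are capped at 1

random.seed(0)
for name, weights in (("uniform", UNIFORM), ("observed", OBSERVED)):
    heps = sorted(sample_hep(weights) for _ in range(10_000))
    print(f"{name:8s}: median HEP = {heps[len(heps) // 2]:.2e}")
```

The point of the sketch is the mechanism, not the numbers: the uniform weighting inflates the spread of the HEP distribution, while weighting toward the nominal level keeps most draws near the nominal HEP.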
NASA Astrophysics Data System (ADS)
Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.
2011-01-01
Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
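The PPCC idea (without the paper's left-censoring machinery) fits in a few lines: correlate the sorted observations with Gumbel quantiles at plotting positions; a correlation well below the Monte Carlo critical value rejects the Gumbel hypothesis. A minimal uncensored sketch, with plotting positions and sample size chosen for illustration:

```python
import math
import random

def gumbel_quantile(p):
    """Inverse CDF of the standard Gumbel distribution."""
    return -math.log(-math.log(p))

def ppcc_gumbel(sample):
    """Correlation between sorted data and Gumbel plotting-position quantiles."""
    n = len(sample)
    xs = sorted(sample)
    qs = [gumbel_quantile((i + 0.5) / n) for i in range(n)]  # Hazen positions
    mx, mq = sum(xs) / n, sum(qs) / n
    sxq = sum((x - mx) * (q - mq) for x, q in zip(xs, qs))
    sxx = sum((x - mx) ** 2 for x in xs)
    sqq = sum((q - mq) ** 2 for q in qs)
    return sxq / math.sqrt(sxx * sqq)

random.seed(1)
# A true Gumbel sample (inverse-transform sampling) should give a PPCC near 1:
sample = [gumbel_quantile(random.random()) for _ in range(200)]
print(round(ppcc_gumbel(sample), 3))
```

In the paper's setting, the same statistic is computed from the uncensored portion of the sample and compared against critical values simulated for each sample size and censoring level.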
Evolving non-thermal electrons in simulations of black hole accretion
NASA Astrophysics Data System (ADS)
Chael, Andrew A.; Narayan, Ramesh; Sądowski, Aleksander
2017-09-01
Current simulations of hot accretion flows around black holes assume either a single-temperature gas or, at best, a two-temperature gas with thermal ions and electrons. However, processes like magnetic reconnection and shocks can accelerate electrons into a non-thermal distribution, which will not quickly thermalize at the very low densities found in many systems. Such non-thermal electrons have been invoked to explain the infrared and X-ray spectra and strong variability of Sagittarius A* (Sgr A*), the black hole at the Galactic Center. We present a method for self-consistent evolution of a non-thermal electron population in the general relativistic magnetohydrodynamic code koral. The electron distribution is tracked across Lorentz factor space and is evolved in space and time, in parallel with thermal electrons, thermal ions and radiation. In this study, for simplicity, energy injection into the non-thermal distribution is taken as a fixed fraction of the local electron viscous heating rate. Numerical results are presented for a model with a low mass accretion rate similar to that of Sgr A*. We find that the presence of a non-thermal population of electrons has negligible effect on the overall dynamics of the system. Due to our simple uniform particle injection prescription, the radiative power in the non-thermal simulation is enhanced at large radii. The energy distribution of the non-thermal electrons shows a synchrotron cooling break, with the break Lorentz factor varying with location and time, reflecting the complex interplay between the local viscous heating rate, magnetic field strength and fluid velocity.
High-voltage electrode optimization towards uniform surface treatment by a pulsed volume discharge
NASA Astrophysics Data System (ADS)
Ponomarev, A. V.; Pedos, M. S.; Scherbinin, S. V.; Mamontov, Y. I.; Ponomarev, S. V.
2015-11-01
In this study, the shape and material of the high-voltage electrode of an atmospheric-pressure plasma generation system were optimised. The research was performed with the goal of achieving maximum uniformity of plasma treatment of the surface of the low-voltage electrode, which has a diameter of 100 mm. To generate low-temperature plasma with a volume of roughly 1 cubic decimetre, a pulsed volume discharge initiated by a corona discharge was used. The uniformity of the plasma in the region of the low-voltage electrode was assessed using a system for measuring the distribution of discharge current density. The system's low-voltage electrode (collector) was a disc 100 mm in diameter, the conducting surface of which was divided into 64 radially located segments of equal surface area. The current at each segment was registered by a high-speed measuring system controlled by an ARM™-based 32-bit microcontroller. To facilitate the interpretation of the results obtained, a computer program was developed that visualises them as a 3D image of the current density distribution on the surface of the low-voltage electrode. Based on the results obtained, an optimum shape for the high-voltage electrode was determined. The uniformity of the distribution of discharge current density in relation to the distance between electrodes was studied, and it was proven that the level of non-uniformity of the current density distribution depends on the size of the gap between the electrodes. Experiments indicated that it is advantageous to use graphite felt VGN-6 (Russian abbreviation) as the material of the high-voltage electrode's emitting surface.
Robustness of power systems under a democratic-fiber-bundle-like model
NASA Astrophysics Data System (ADS)
Yaǧan, Osman
2015-06-01
We consider a power system with N transmission lines whose initial loads (i.e., power flows) L1, ..., LN are independent and identically distributed with PL(x) = P[L ≤ x]. The capacity Ci defines the maximum flow allowed on line i and is assumed to be given by Ci = (1 + α)Li, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p* = 1 − E[L] / max_x(P[L > x](αx + E[L | L > x])), where E[·] is the expectation operator; (ii) we derive conditions on the distribution PL(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions (uniform, Pareto, and Weibull), showing that with the minimum load Lmin and mean load E[L] fixed, the Pareto distribution is the worst (in terms of robustness) among the three, whereas the Weibull distribution is the best when its shape parameter is selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p* is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
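The democratic redistribution cascade is straightforward to simulate. A sketch with illustrative parameters (uniform loads on [1, 2], α = 0.5; the function names and values are ours, not from the paper):

```python
import random

def surviving_fraction(loads, alpha, p, rng):
    """Attack a p fraction of lines, redistribute shed load equally among
    the survivors, and iterate the cascade to a fixed point."""
    n = len(loads)
    lines = [[L, (1.0 + alpha) * L] for L in loads]   # [load, capacity]
    rng.shuffle(lines)
    k = int(p * n)
    overflow = sum(L for L, _ in lines[:k])           # load shed by the attack
    alive = lines[k:]
    while alive and overflow > 0.0:
        extra = overflow / len(alive)
        overflow = 0.0
        survivors = []
        for L, C in alive:
            if L + extra <= C:
                survivors.append([L + extra, C])
            else:
                overflow += L + extra                 # failed line sheds everything
        alive = survivors
    return len(alive) / n

rng = random.Random(2)
loads = [rng.uniform(1.0, 2.0) for _ in range(1000)]  # uniform loads, illustrative
for p in (0.1, 0.3, 0.5, 0.7):
    print(p, surviving_fraction(loads, alpha=0.5, p=p, rng=random.Random(3)))
```

Sweeping p exposes the abrupt, first-order character of the breakdown: below the critical attack size the surviving fraction stays close to 1 − p, and beyond it the cascade collapses the system entirely.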
Material Distribution Optimization for the Shell Aircraft Composite Structure
NASA Astrophysics Data System (ADS)
Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.
2016-09-01
One of the main goals in aircraft structure design is decreasing weight while increasing stiffness. Composite structures have recently become popular in aircraft because of their mechanical properties and wide range of optimization possibilities. Weight distribution and lay-up are keys to creating lightweight, stiff structures. In this paper we discuss the optimization of a specific structure that undergoes non-uniform air pressure at different flight conditions, reducing the level of noise caused by airflow-induced vibrations at a constrained weight of the part. The initial model was created with the CAD tool Siemens NX; finite element analysis and post-processing were performed with COMSOL Multiphysics and MATLAB. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations supplemented by a k-ω turbulence model provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. Wall thickness was varied using a parametric approach via an auxiliary sphere whose radius and center coordinates were the design variables. To avoid local stress concentration, the wall thickness increment was defined as a smooth function on the shell surface dependent on the auxiliary sphere's position and size. Our study consists of multiple steps: CAD/CAE transformation of the model, determining wind pressure for different flow angles, optimizing the wall thickness distribution for specific flow angles, and designing a lay-up for the optimal material distribution. The studied structure was improved in terms of maximum and average strain energy at the constrained expense of weight growth. The developed methods and tools can be applied to a wide range of shell-like structures made of multilayered quasi-isotropic laminates.
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
Enhancement of a 2D front-tracking algorithm with a non-uniform distribution of Lagrangian markers
NASA Astrophysics Data System (ADS)
Febres, Mijail; Legendre, Dominique
2018-04-01
The 2D front-tracking method is enhanced to control the development of spurious velocities for non-uniform distributions of markers. The hybrid formulation of Shin et al. (2005) [7] is considered. A new tangent calculation is proposed for computing the tension force at markers. A new reconstruction method is also proposed to manage non-uniform distributions of markers. We show that for both the static and the translating spherical drop test cases the spurious currents are reduced to machine precision. We also show that the ratio of the Lagrangian grid size Δs to the Eulerian grid size Δx has to satisfy Δs / Δx > 0.2 to ensure such a low level of spurious velocity. The method is found to provide very good agreement with benchmark test cases from the literature.
Modeling Error Distributions of Growth Curve Models through Bayesian Methods
ERIC Educational Resources Information Center
Zhang, Zhiyong
2016-01-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…
Results on Vertex Degree and K-Connectivity in Uniform S-Intersection Graphs
2014-01-01
distribution. A uniform s-intersection graph models the topology of a secure wireless sensor network employing the widely used s-composite key predistribution scheme. Our theoretical findings are also confirmed by numerical results.
3D reconstruction from non-uniform point clouds via local hierarchical clustering
NASA Astrophysics Data System (ADS)
Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo
2017-07-01
Raw scanned 3D point clouds are usually irregularly distributed due to the essential shortcomings of laser sensors, which therefore poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity and the latter transforms the non-uniform point set into uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
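The decomposition-plus-clustering idea above can be sketched in a few lines. The cell-collapse rule below is a deliberate simplification of the paper's LHC method (a single-level cubic grid rather than an adaptive octree with hierarchical clustering), and all names are illustrative:

```python
import numpy as np

def uniformize_point_cloud(points, cell_size):
    """Simplified sketch of point-distribution uniformization: partition
    space into cubic cells of edge length cell_size and replace each
    occupied cell's points by their centroid, capping local density."""
    keys = np.floor(points / cell_size).astype(np.int64)  # cell index per point
    out = []
    for key in np.unique(keys, axis=0):        # one representative per cell
        mask = np.all(keys == key, axis=1)
        out.append(points[mask].mean(axis=0))  # collapse cell to its centroid
    return np.array(out)

# A dense blob plus sparse outliers: output density is roughly uniform.
rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.05, size=(500, 3))   # highly non-uniform cluster
sparse = rng.uniform(-1.0, 1.0, size=(20, 3))
cloud = np.vstack([dense, sparse])
uniform = uniformize_point_cloud(cloud, cell_size=0.25)
print(len(cloud), "->", len(uniform))
```

The real method refines cells adaptively and clusters hierarchically within them; this sketch only shows the density-capping effect of spatial binning.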
Visualization of self-heating of an all climate battery by infrared thermography
NASA Astrophysics Data System (ADS)
Zhang, Guangsheng; Tian, Hua; Ge, Shanhai; Marple, Dan; Sun, Fengchun; Wang, Chao-Yang
2018-02-01
Self-heating Li-ion battery (SHLB), a.k.a. all climate battery, has provided a novel and practical solution to the low temperature power loss challenge. During its rapid self-heating, it is critical to keep the heating process and temperature distributions uniform for superior battery performance, durability and safety. Through infrared thermography of an experimental SHLB cell activated from various low ambient temperatures, we find that temperature distribution is uniform over the active electrode area, suggesting uniform heating. We also find that a hot spot exists at the activation terminal during self-heating, which provides diagnostics for improvement of next generation SHLB cells without the hot spot.
NASA Technical Reports Server (NTRS)
Bathke, C. G.
1976-01-01
Electron energy distribution functions were calculated in a U235 plasma at 1 atmosphere for various plasma temperatures and neutron fluxes. The distributions are assumed to be a summation of a high energy tail and a Maxwellian distribution. The sources of energetic electrons considered are the fission-fragment induced ionization of uranium and the electron induced ionization of uranium. The calculation of the high energy tail is reduced to an electron slowing down calculation, from the most energetic source to the energy where the electron is assumed to be incorporated into the Maxwellian distribution. The pertinent collisional processes are electron-electron scattering and electron induced ionization and excitation of uranium. Two distinct methods were employed in the calculation of the distributions. One method is based upon the assumption of continuous slowing and yields a distribution inversely proportional to the stopping power. An iteration scheme is utilized to include the secondary electron avalanche. In the other method, a governing equation is derived without assuming continuous electron slowing. This equation is solved by a Monte Carlo technique.
Apparent dispersion in transient groundwater flow
Goode, Daniel J.; Konikow, Leonard F.
1990-01-01
This paper investigates the effects of large-scale temporal velocity fluctuations, particularly changes in the direction of flow, on solute spreading in a two-dimensional aquifer. Relations for apparent longitudinal and transverse dispersivity are developed through an analytical solution for dispersion in a fluctuating, quasi-steady uniform flow field, in which storativity is zero. For transient flow, spatial moments are evaluated from numerical solutions. Ignored or unknown transients in the direction of flow primarily act to increase the apparent transverse dispersivity because the longitudinal dispersivity is acting in a direction that is not the assumed flow direction. This increase is a function of the angle between the transient flow vector and the assumed steady state flow direction and the ratio of transverse to longitudinal dispersivity. The maximum effect on transverse dispersivity occurs if storativity is assumed to be zero, such that the flow field responds instantly to boundary condition changes.
Gravitational Collapse of Magnetized Clouds. II. The Role of Ohmic Dissipation
NASA Astrophysics Data System (ADS)
Shu, Frank H.; Galli, Daniele; Lizano, Susana; Cai, Mike
2006-08-01
We formulate the problem of magnetic field dissipation during the accretion phase of low-mass star formation, and we carry out the first step of an iterative solution procedure by assuming that the gas is in free fall along radial field lines. This so-called ``kinematic approximation'' ignores the back reaction of the Lorentz force on the accretion flow. In quasi-steady state and assuming the resistivity coefficient to be spatially uniform, the problem is analytically soluble in terms of Legendre polynomials and confluent hypergeometric functions. The dissipation of the magnetic field occurs inside a region of radius inversely proportional to the mass of the central star (the ``Ohm radius''), where the magnetic field becomes asymptotically straight and uniform. In our solution the magnetic flux problem of star formation is avoided because the magnetic flux dragged into the accreting protostar is always zero. Our results imply that the effective resistivity of the infalling gas must be higher by at least 1 order of magnitude than the microscopic electric resistivity, to avoid conflict with measurements of paleomagnetism in meteorites and with the observed luminosity of regions of low-mass star formation.
Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression
Peng, Limin; Xu, Jinfeng; Kutner, Nancy
2013-01-01
Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515
Particle acceleration at shocks in the presence of a braided magnetic field
NASA Astrophysics Data System (ADS)
Kirk, J. G.; Duffy, P.; Gallant, Y. A.
1997-05-01
The theory of first order Fermi acceleration at shock fronts assumes charged particles undergo spatial diffusion in a uniform magnetic field. If, however, the magnetic field is not uniform, but has a stochastic or braided structure, the transport of charged particles across the average direction of the field is more complicated. Assuming quasi-linear behaviour of the field lines, the particles undergo sub-diffusion (
Design and testing of a uniformly solar energy TIR-R concentration lenses for HCPV systems.
Shen, S C; Chang, S J; Yeh, C Y; Teng, P C
2013-11-04
In this paper, a total internal reflection-refraction (TIR-R) concentration (U-TIR-R-C) lens module was designed for uniformity using the energy configuration method, to eliminate hot spots on the surface of the solar cell and increase conversion efficiency. The design of most current solar concentrators emphasizes high-power concentration of solar energy while neglecting the conversion inefficiency resulting from hot spots generated by uneven distributions of solar energy concentrated on solar cells. The energy configuration method proposed in this study employs the concept of ray tracing to uniformly distribute solar energy to solar cells through a U-TIR-R-C lens module. The U-TIR-R-C lens module adopted in this study had a 76-mm diameter, a 41-mm thickness, a concentration ratio of 1134 Suns, 82.6% optical efficiency, and 94.7% uniformity. The experiments demonstrated that the U-TIR-R-C lens module reduced the core temperature of the solar cell from 108 °C to 69 °C and the overall temperature difference from 45 °C to 10 °C, and effectively increased the relative conversion efficiency by approximately 3.8%. Therefore, the designed U-TIR-R-C lens module can effectively concentrate a large area of sunlight onto a small solar cell, and the concentrated solar energy can be evenly distributed over the solar cell to achieve uniform irradiance and effectively eliminate hot spots.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2014-07-01
Climate change may alter the spatial distribution, composition, structure and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous with sparse trees. The dynamics of ecotones are mainly determined by the growth and competition of individual plants in the communities. Therefore it is necessary to calculate the solar radiation absorbed by individual plants in order to understand and predict their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities. The model is developed based on geometrical optical relationships assuming that crowns of woody plants are rectangular boxes with uniform leaf area density. The model calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community. The solar radiation received on the ground is also calculated. We tested the model by comparing with the results of random distribution of plants. The tests show that the model results are very close to the averages of the random distributions. This model is efficient in computation, and can be included in vegetation models to simulate long-term transient responses of plant communities to climate change. The code and a user's manual are provided as Supplement of the paper.
Kilb, Debi; Hardebeck, J.L.
2006-01-01
We estimate the strike and dip of three California fault segments (Calaveras, Sargent, and a portion of the San Andreas near San Juan Bautista) based on principal component analysis of accurately located microearthquakes. We compare these fault orientations with two different first-motion focal mechanism catalogs: the Northern California Earthquake Data Center (NCEDC) catalog, calculated using the FPFIT algorithm (Reasenberg and Oppenheimer, 1985), and a catalog created using the HASH algorithm that tests mechanism stability relative to seismic velocity model variations and earthquake location (Hardebeck and Shearer, 2002). We assume any disagreement (misfit >30° in strike, dip, or rake) indicates inaccurate focal mechanisms in the catalogs. With this assumption, we can quantify the parameters that identify the most optimally constrained focal mechanisms. For the NCEDC/FPFIT catalogs, we find that the best quantitative discriminator of quality focal mechanisms is the station distribution ratio (STDR) parameter, an indicator of how the stations are distributed about the focal sphere. Requiring STDR > 0.65 increases the acceptable mechanisms from 34%–37% to 63%–68%. This suggests stations should be uniformly distributed surrounding, rather than aligning with, known fault traces. For the HASH catalogs, the fault plane uncertainty (FPU) parameter is the best discriminator, increasing the percent of acceptable mechanisms from 63%–78% to 81%–83% when FPU ≤ 35°. The overall higher percentage of acceptable mechanisms and the usefulness of the formal uncertainty in identifying quality mechanisms validate the HASH approach of testing for mechanism stability.
Spatial distribution of the RF power absorbed in a helicon plasma source
NASA Astrophysics Data System (ADS)
Aleksenko, O. V.; Miroshnichenko, V. I.; Mordik, S. N.
2014-08-01
The spatial distributions of the RF power absorbed by plasma electrons in an ion source operating in the helicon mode (ω_ci < ω < ω_ce < ω_pe) are studied numerically by using a simplified model of an RF plasma source in an external uniform magnetic field. The parameters of the source used in numerical simulations are determined by the necessity of the simultaneous excitation of two types of waves, helicons and Trivelpiece-Gould modes, for which the corresponding transparency diagrams are used. The numerical simulations are carried out for two values of the working gas (helium) pressure and two values of the discharge chamber length under the assumption that symmetric modes are excited. The parameters of the source correspond to those of the injector of the nuclear scanning microprobe operating at the Institute of Applied Physics, National Academy of Sciences of Ukraine. It is assumed that the mechanism of RF power absorption is based on the acceleration of plasma electrons in the field of a Trivelpiece-Gould mode, which is interrupted by pair collisions of plasma electrons with neutral atoms and ions of the working gas. The simulation results show that the total absorbed RF power at a fixed plasma density depends in a resonant manner on the magnetic field. The resonance is found to become smoother with increasing working gas pressure. The distributions of the absorbed RF power in the discharge chamber are presented. The achievable density of the extracted current is estimated using the Bohm criterion.
Effect of Heterogeneous Investments on the Evolution of Cooperation in Spatial Public Goods Game
Huang, Keke; Wang, Tao; Cheng, Yuan; Zheng, Xiaoping
2015-01-01
Understanding the emergence of cooperation in spatial public goods game remains a grand challenge across disciplines. In most previous studies, it is assumed that the investments of all the cooperators are identical, and often equal to 1. However, it is worth mentioning that players are diverse and heterogeneous when choosing actions in the rapidly developing modern society, and researchers have recently shown more interest in the heterogeneity of players. For modeling the heterogeneous players without loss of generality, it is assumed in this work that the investment of a cooperator is a random variable with uniform distribution, the mean value of which is equal to 1. The results of extensive numerical simulations convincingly indicate that heterogeneous investments can promote cooperation. Specifically, a large value of the variance of the random variable can effectively decrease the two critical values for the result of behavioral evolution. Moreover, the larger the variance is, the better the promotion effect will be. In addition, this article has discussed the impact of heterogeneous investments when the coevolution of both strategy and investment is taken into account. Comparing the promotion effect of coevolution of strategy and investment with that of strategy imitation only, we can conclude that the coevolution of strategy and investment decreases the asymptotic fraction of cooperators by weakening the heterogeneity of investments, which further demonstrates that heterogeneous investments can promote cooperation in spatial public goods game. PMID:25781345
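The heterogeneous-investment setup described above is easy to reproduce in outline. The sketch below implements one payoff round of a spatial public goods game with groups of size 5 (a site plus its four neighbours on a periodic lattice) and cooperator investments drawn uniformly from [1 - width, 1 + width], mean 1 as in the study; the lattice details, group rule, and names are assumptions, not taken from the paper:

```python
import numpy as np

def pgg_round(strategies, r, width, rng):
    """One payoff round of a spatial public goods game on an L x L periodic
    lattice. Each site belongs to the 5 groups centred on itself and its
    von Neumann neighbours. Cooperators (strategy 1) invest a uniform random
    amount on [1 - width, 1 + width]; defectors (0) invest nothing."""
    invest = np.where(strategies == 1,
                      rng.uniform(1.0 - width, 1.0 + width,
                                  size=strategies.shape),
                      0.0)
    shifts = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    payoff = np.zeros_like(invest)
    for cx, cy in shifts:  # the 5 groups each site participates in
        # total investment pooled in the group centred at this offset
        pool = sum(np.roll(np.roll(invest, cx + dx, 0), cy + dy, 1)
                   for dx, dy in shifts)
        payoff += r * pool / 5.0          # public good shared equally
    payoff -= 5.0 * invest                # cooperators pay into all 5 groups
    return payoff

rng = np.random.default_rng(1)
S = rng.integers(0, 2, size=(32, 32))     # random cooperator/defector state
p_hetero = pgg_round(S, r=4.0, width=0.8, rng=rng)  # heterogeneous investments
p_homo = pgg_round(S, r=4.0, width=0.0, rng=rng)    # classical identical case
```

With width = 0 this reduces to the standard model with unit investments; the paper's result is that widening the distribution shifts the critical synergy factors in favour of cooperation.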
Modeling and simulation of Li-ion conduction in poly(ethylene oxide)
NASA Astrophysics Data System (ADS)
Gitelman, L.; Israeli, M.; Averbuch, A.; Nathan, M.; Schuss, Z.; Golodnitsky, D.
2007-12-01
Polyethylene oxide (PEO) containing a lithium salt (e.g., LiI) serves as a solid polymer electrolyte (SPE) in thin-film batteries, and its ionic conductivity is a key parameter of their performance. We model and simulate Li+ ion conduction in a single PEO molecule. Our simplified stochastic model of ionic motion is based on an analogy between protein channels of biological membranes that conduct Na+, K+, and other ions, and the PEO helical chain that conducts Li+ ions. In contrast with protein channels and salt solutions, the PEO is both the channel and the solvent for the lithium salt (e.g., LiI). The mobile ions are treated as charged spherical Brownian particles. We simulate Smoluchowski dynamics in channels with a radius of ca. 0.1 nm and study the effect of stretching and temperature on ion conductivity. We assume that each helix (molecule) forms a random angle with the axis between the electrodes and that the polymeric film is composed of many uniformly distributed oriented boxes that include molecules with the same direction. We further assume that mechanical stretching aligns the molecular structures in each box along the axis of stretching (intra-box alignment). Our model thus predicts the PEO conductivity as a function of the stretching, the salt concentration, and the temperature. The computed enhancement of the ionic conductivity in the stretch direction is in good agreement with experimental results. The simulation results are also in qualitative agreement with recent theoretical and experimental results.
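The Smoluchowski (overdamped Langevin) dynamics mentioned above can be sketched with a plain Euler-Maruyama step; the harmonic test force and all parameter values below are illustrative, not those of the paper:

```python
import numpy as np

def smoluchowski_step(x, force, D, kT, dt, rng):
    """One Euler-Maruyama step of overdamped (Smoluchowski) dynamics for
    Brownian particles: deterministic drift (D/kT)*F(x)*dt plus Gaussian
    noise of variance 2*D*dt. Generic sketch, units nondimensionalized."""
    drift = (D / kT) * force(x) * dt
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
    return x + drift + noise

# Demo: an ensemble of ions in a harmonic trap F = -k*x; the stationary
# position variance should approach kT/k (here 1), a standard consistency check.
rng = np.random.default_rng(3)
k, kT, D, dt = 1.0, 1.0, 1.0, 0.01
x = np.zeros(1000)
for _ in range(4000):
    x = smoluchowski_step(x, lambda y: -k * y, D, kT, dt, rng)
var = x.var()   # close to kT/k = 1 at equilibrium
```

In the paper's setting the force would come from the channel potential and applied field rather than a harmonic trap.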
NASA Technical Reports Server (NTRS)
Siegel, R.; Sparrow, E. M.
1960-01-01
The purpose of this note is to examine in a more precise way how the Nusselt numbers for turbulent heat transfer in both the fully developed and thermal entrance regions of a circular tube are affected by two different wall boundary conditions. The comparisons are made for: (a) uniform wall temperature (UWT); and (b) uniform wall heat flux (UHF). Several papers which have been concerned with the turbulent thermal entrance region problem are given. Although these analyses have all utilized an eigenvalue formulation for the thermal entrance region, there were differences in the choices of eddy diffusivity expressions, velocity distributions, and methods for carrying out the numerical solutions. These differences were also found in the fully developed analyses. Hence when making a comparison of the analytical results for uniform wall temperature and uniform wall heat flux, it was not known if differences in the Nusselt numbers could be wholly attributed to the difference in wall boundary conditions, since all the analytical results were not obtained in a consistent way. To have results which could be directly compared, computations were carried out for the uniform wall temperature case, using the same eddy diffusivity, velocity distribution, and digital computer program employed for uniform wall heat flux. In addition, the previous work was extended to a lower Reynolds number range so that comparisons could be made over a wide range of both Reynolds and Prandtl numbers.
Variable area fuel cell process channels
Kothmann, Richard E.
1981-01-01
A fuel cell arrangement having a non-uniform distribution of fuel and oxidant flow paths, on opposite sides of an electrolyte matrix, sized and positioned to provide approximately uniform fuel and oxidant utilization rates, and cell conditions, across the entire cell.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Uniform Test Method is used to test more than one unit of a basic model to determine the efficiency of... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2014-01-01 2014-01-01 false Uniform Test Method for Measuring the Energy Consumption...
Confined energy distribution for charged particle beams
Jason, Andrew J.; Blind, Barbara
1990-01-01
A charged particle beam is formed to a relatively larger area beam which is well-contained and has a beam area which relatively uniformly deposits energy over a beam target. Linear optics receive an accelerator beam and output a first beam with a first waist defined by a relatively small size in a first dimension normal to a second dimension. Nonlinear optics, such as an octupole magnet, are located about the first waist and output a second beam having a phase-space distribution which folds the beam edges along the second dimension toward the beam core to develop a well-contained beam and a relatively uniform particle intensity across the beam core. The beam may then be expanded along the second dimension to form the uniform ribbon beam at a selected distance from the nonlinear optics. Alternately, the beam may be passed through a second set of nonlinear optics to fold the beam edges in the first dimension. The beam may then be uniformly expanded along the first and second dimensions to form a well-contained, two-dimensional beam for illuminating a two-dimensional target with a relatively uniform energy deposition.
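The edge-folding action of the nonlinear optics can be illustrated with a thin-lens octupole kick followed by a drift; the strengths below are arbitrary, chosen only to make the folding visible, and are not taken from the patent:

```python
import numpy as np

def octupole_fold(x, xp, k3, drift):
    """Thin-lens octupole kick followed by a drift, sketching how nonlinear
    optics fold beam tails toward the core. The cubic kick is strong at the
    beam edges and weak in the core; the drift converts the angle change
    into a position change, flattening the transverse profile."""
    xp = xp - k3 * x ** 3        # octupole kick ~ x^3
    x = x + drift * xp           # drift to the target plane
    return x, xp

rng = np.random.default_rng(4)
x = rng.standard_normal(10000)   # Gaussian beam at the first waist
xp = np.zeros_like(x)            # idealized zero-divergence waist
x_new, xp_new = octupole_fold(x, xp, k3=0.05, drift=1.0)
# Tails beyond roughly 2.6 sigma map back toward the core, so the folded
# distribution is better contained than the original Gaussian.
```

A second octupole stage, as in the patent, would apply the same folding in the other transverse dimension.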
NASA Astrophysics Data System (ADS)
Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki
2017-08-01
This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed in a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to 0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
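The correction described above combines the lamp's angular intensity distribution with the sphere's spatial responsivity. A minimal sketch, assuming a polar-symmetric lamp on a regular theta grid and a specific normalisation (the paper's exact formula may differ):

```python
import numpy as np

def spatial_correction_factor(intensity, responsivity, theta):
    """Sketch of a spatial non-uniformity correction: the ratio of the
    lamp's total relative flux to the responsivity-weighted signal the
    sphere actually produces. sin(theta) is the solid-angle weight for a
    polar grid; the constant grid spacing cancels in the ratio."""
    w = np.sin(theta)
    flux = np.sum(intensity * w)                    # true relative flux
    signal = np.sum(intensity * responsivity * w)   # sphere's reading
    return flux / signal

theta = np.linspace(0.0, np.pi, 181)
intensity = np.exp(-(theta / 0.6) ** 2)        # hypothetical narrow-beam LED
responsivity = 1.0 + 0.01 * np.cos(theta)      # mildly non-uniform sphere
scf = spatial_correction_factor(intensity, responsivity, theta)
```

Because this beam is concentrated where the sphere over-responds, the factor comes out slightly below 1, consistent in magnitude with the sub-percent corrections reported in the paper.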
Paillet, Frederick L.
1985-01-01
Acoustic-waveform and acoustic-televiewer logs were obtained for a 400-meter interval of deeply buried basalt flows in three boreholes, and over shorter intervals in two additional boreholes located on the U.S. Department of Energy's Hanford site in Benton County, Washington. Borehole-wall breakouts were observed in the unaltered interiors of a large part of individual basalt flows; however, several of the flows in one of the five boreholes had almost no breakouts. The distribution of breakouts observed on the televiewer logs correlated closely with the incidence of core disking in some intervals, but the correlation was not always perfect, perhaps because of differences in the specific fracture mechanisms involved. Borehole-wall breakouts were consistently located on the east and west sides of the boreholes. The orientation is consistent with previous estimates of the principal horizontal-stress field in south-central Washington, if breakouts are assumed to form along the azimuth of the least principal stress. The distribution of breakouts repeatedly indicated an interval of breakout-free rock at the top and bottom of flows. Because breakouts frequently terminate at major low-angle fractures, the data indicate that fracturing may have relieved some of the horizontal stresses near flow tops and bottoms. Unaltered and unfractured basalt appeared to have a uniform compressional velocity of 6.0 ± 0.1 km/sec and a uniform shear velocity of 3.35 ± 0.1 km/sec throughout flow interiors. Acoustic-waveform logs also indicated that borehole-wall breakouts did not affect acoustic propagation along the borehole, so fracturing associated with the formation of breakouts appeared to be confined to a thin annulus of stress concentration around the borehole.
Televiewer logs obtained before and after hydraulic fracturing in these boreholes indicated the extent of induced fractures, and also indicated minor changes to pre-existing fractures that may have been inflated during fracture generation. (USGS)
Higgs and superparticle mass predictions from the landscape
NASA Astrophysics Data System (ADS)
Baer, Howard; Barger, Vernon; Serce, Hasan; Sinha, Kuver
2018-03-01
Predictions for the scale of SUSY breaking from the string landscape go back at least a decade to the work of Denef and Douglas on the statistics of flux vacua. The assumption that an assortment of SUSY-breaking F and D terms are present in the hidden sector, and that their values are uniformly distributed in the landscape of D = 4, N = 1 effective supergravity models, leads to the expectation that the landscape pulls towards large values of soft terms, favored by a power-law behavior P(m_soft) ∼ m_soft^n. On the other hand, similar to Weinberg's prediction of the cosmological constant, one can assume an anthropic selection of weak scales not too far from the measured value characterized by m_W,Z,h ∼ 100 GeV. Working within a fertile patch of gravity-mediated low-energy effective theories where the superpotential μ term is ≪ m_3/2, as occurs in models such as radiative breaking of Peccei-Quinn symmetry, this biases statistical distributions on the landscape by a cutoff on the parameter Δ_EW, which measures fine-tuning in the m_Z-μ mass relation. The combined effect of statistical and anthropic pulls turns out to favor low-energy phenomenology that is more or less agnostic to UV physics. While a uniform selection n = 0 of soft terms produces too low a value for m_h, taking n = 1 and 2 most probably produces m_h ∼ 125 GeV for negative trilinear terms. For n ≥ 1, there is a pull towards split generations, with first- and second-generation squark and slepton masses m(q̃,ℓ̃)(1,2) ∼ 10-30 TeV whilst m(t̃_1) ∼ 1-2 TeV. The most probable gluino mass comes in at ∼ 3-4 TeV, apparently beyond the reach of HL-LHC (although the required quasi-degenerate higgsinos should still be within reach). We comment on consequences for SUSY collider and dark matter searches.
Combined Loads Test Fixture for Thermal-Structural Testing Aerospace Vehicle Panel Concepts
NASA Technical Reports Server (NTRS)
Fields, Roger A.; Richards, W. Lance; DeAngelis, Michael V.
2004-01-01
A structural test requirement of the National Aero-Space Plane (NASP) program has resulted in the design, fabrication, and implementation of a combined loads test fixture. Principal requirements for the fixture are testing a 4- by 4-ft hat-stiffened panel with combined axial (either tension or compression) and shear load at temperatures ranging from room temperature to 915 F, keeping the test panel stresses caused by the mechanical loads uniform, and thermal stresses caused by non-uniform panel temperatures minimized. The panel represents the side fuselage skin of an experimental aerospace vehicle, and was produced for the NASP program. A comprehensive mechanical loads test program using the new test fixture has been conducted on this panel from room temperature to 500 F. Measured data have been compared with finite-element analyses predictions, verifying that uniform load distributions were achieved by the fixture. The overall correlation of test data with analysis is excellent. The panel stress distributions and temperature distributions are very uniform and fulfill program requirements. This report provides details of an analytical and experimental validation of the combined loads test fixture. Because of its simple design, this unique test fixture can accommodate panels from a variety of aerospace vehicle designs.
NASA Technical Reports Server (NTRS)
Campbell, W.
1981-01-01
A theoretical evaluation of the stability of an explicit finite difference solution of the transient temperature field in a composite medium is presented. The grid points of the field are assumed uniformly spaced, and media interfaces are either vertical or horizontal and pass through grid points. In addition, perfect contact between different media (infinite interfacial conductance) is assumed. A finite difference form of the conduction equation is not valid at media interfaces; therefore, heat balance forms are derived. These equations were subjected to stability analysis, and a computer graphics code was developed that permitted determination of a maximum time step for a given grid spacing.
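For the interior points of such a grid, the classical stability bound for the explicit (FTCS) conduction scheme gives a quick sanity check; the report's interface heat-balance forms can be more restrictive. A sketch with illustrative material values:

```python
def max_stable_timestep(dx, diffusivities):
    """Classical stability bound for the 2-D explicit (FTCS) conduction
    scheme on a uniform grid: dt <= dx^2 / (4 * alpha). For a composite
    medium, the most diffusive material governs the allowable step.
    This is the textbook interior-point criterion, offered as a sketch."""
    alpha_max = max(diffusivities)
    return dx ** 2 / (4.0 * alpha_max)

# Hypothetical copper/steel composite with 1 mm grid spacing.
alphas = {"copper": 1.11e-4, "steel": 4.2e-6}   # thermal diffusivity, m^2/s
dt = max_stable_timestep(1.0e-3, alphas.values())
print(f"max stable dt = {dt:.3e} s")   # the more diffusive copper limits dt
```

As the report notes, nodes on a media interface obey heat-balance equations rather than this interior form, so the true maximum time step must come from the stability analysis of those equations.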
2009-09-01
non-uniform, stationary rotation / non-stationary rotation, mass... Cayley spectral transformation as a means of rotating the basin of convergence of the Arnoldi algorithm. Instead of doing the inversion of the large... pair of counter-rotating streamwise vortices embedded in uniform shear flow. Consistently with earlier work by the same group, the main present finding
Effects of small variations of speed of sound in optoacoustic tomographic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deán-Ben, X. Luís; Ntziachristos, Vasilis; Razansky, Daniel, E-mail: dr@tum.de
2014-07-15
Purpose: Speed of sound difference in the imaged object and surrounding coupling medium may reduce the resolution and overall quality of optoacoustic tomographic reconstructions obtained by assuming a uniform acoustic medium. In this work, the authors investigate the effects of acoustic heterogeneities and discuss potential benefits of accounting for those during the reconstruction procedure. Methods: The time shift of optoacoustic signals in an acoustically heterogeneous medium is studied theoretically by comparing different continuous and discrete wave propagation models. A modification of filtered back-projection reconstruction is subsequently implemented by considering a straight acoustic rays model for ultrasound propagation. The results obtained with this reconstruction procedure are compared numerically and experimentally to those obtained assuming a heuristically fitted uniform speed of sound in both full-view and limited-view optoacoustic tomography scenarios. Results: The theoretical analysis shows that the errors in the time-of-flight of the signals predicted by the straight acoustic rays model tend to be generally small. When using this model for reconstructing simulated data, the resulting images accurately represent the theoretical ones. On the other hand, significant deviations in the location of the absorbing structures are found when using a uniform speed of sound assumption. The experimental results obtained with tissue-mimicking phantoms and a mouse postmortem are found to be consistent with the numerical simulations. Conclusions: Accurate analysis of effects of small speed of sound variations demonstrates that accounting for differences in the speed of sound allows improving optoacoustic reconstruction results in realistic imaging scenarios involving acoustic heterogeneities in tissues and surrounding media.
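The straight-acoustic-rays model described above can be sketched in a few lines. This is our own hedged illustration, not the authors' code: the time-of-flight from an absorber to a detector is the ray length in each medium divided by that medium's speed of sound, and the time shift is the difference against a heuristically fitted uniform speed. All path lengths and speeds below are assumed example values.

```python
# Sketch of a straight-ray time-of-flight in a piecewise-homogeneous medium,
# compared with a uniform-speed assumption over the same total path.

def tof_straight_ray(segments):
    """Time of flight (s) along a straight ray.
    segments: list of (path_length_m, speed_of_sound_m_per_s)."""
    return sum(length / c for length, c in segments)

# Example (assumed values): 20 mm of tissue at 1540 m/s plus 30 mm of
# coupling water at 1480 m/s, versus a fitted uniform 1500 m/s over 50 mm.
t_hetero = tof_straight_ray([(0.020, 1540.0), (0.030, 1480.0)])
t_uniform = 0.050 / 1500.0
print(f"time shift = {(t_hetero - t_uniform) * 1e9:.1f} ns")
```

Even this small shift displaces reconstructed structures by tens of micrometers at typical speeds, which is consistent with the localization errors the abstract reports for the uniform-speed assumption.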
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Sarti, Pierguido
2010-08-01
This paper sets the rules for an optimal definition of precise signal path variation (SPV) models, revising and highlighting the deficiencies in the calculations adopted in previous studies and improving the computational approach. Hence, the linear coefficients that define the SPV model are rigorously determined. The equations that are presented depend on the dimensions and the focal lengths of the telescopes as well as on the feed illumination taper. They hold for any primary focus or Cassegrainian very long baseline interferometry (VLBI) telescope. Earlier investigations usually determined the SPV models assuming a uniform illumination of the telescope mirrors. We prove this hypothesis to be over-simplistic by comparing results derived adopting (a) uniform, (b) Gaussian and (c) binomial illumination functions. Numerical computations are developed for the AZ-EL-mount, 32 m Medicina and Noto (Italy) VLBI telescopes, the latter being the only telescopes for which thorough information on gravity-dependent deformation patterns is available. In particular, assuming a Gaussian illumination function, the SPV in primary focus over the elevation range [0°, 90°] is 10.1 mm for Medicina and 7.2 mm for Noto. With a uniform illumination function the maximal path variation is 17.6 mm for Medicina and 12.7 mm for Noto, thus highlighting the strong dependency on the choice of the illumination function. According to our findings, a revised SPV model is released for Medicina and a model for Noto is presented here for the first time. Currently, no other VLBI telescope possesses SPV models capable of correcting gravity-dependent observation biases.
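Why the illumination function matters can be shown with a toy calculation. This is our own construction, not the paper's model: the effective signal path is an illumination-weighted average over the dish aperture, so a Gaussian taper that under-weights the mirror edge gives a different value than uniform weighting for the same deformation pattern. The deformation shape and taper strength below are arbitrary assumptions.

```python
# Minimal numeric illustration: aperture-averaged path term under uniform
# versus Gaussian illumination weighting (midpoint-rule radial integration).

import math

def weighted_path(delta, weight, radius=1.0, n=4000):
    """Aperture average of a path term delta(r) under weight(r), 0 <= r <= radius."""
    num = den = 0.0
    for i in range(n):
        r = radius * (i + 0.5) / n
        w = r * weight(r)                 # extra factor r: annulus area element
        num += w * delta(r)
        den += w
    return num / den

delta = lambda r: r ** 2                  # toy edge-heavy deformation pattern
uniform = weighted_path(delta, lambda r: 1.0)
gauss = weighted_path(delta, lambda r: math.exp(-4.0 * r * r))  # edge taper
print(f"uniform: {uniform:.3f}, Gaussian: {gauss:.3f}")
```

The tapered average is markedly smaller because the edge, where the toy deformation is largest, contributes little, which mirrors the 17.6 mm versus 10.1 mm contrast reported for Medicina.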
Lim, Jing; Chong, Mark Seow Khoon; Chan, Jerry Kok Yen; Teoh, Swee-Hin
2014-06-25
Synthetic polymers used in tissue engineering require functionalization with bioactive molecules to elicit specific physiological reactions. These additives must be homogeneously dispersed in order to achieve enhanced composite mechanical performance and uniform cellular response. This work demonstrates the use of a solvent-free powder processing technique to form osteoinductive scaffolds from cryomilled polycaprolactone (PCL) and tricalcium phosphate (TCP). Cryomilling is performed to achieve micrometer-sized distribution of PCL and reduce melt viscosity, thus improving TCP distribution and structural integrity. A breakthrough is achieved in the successful fabrication of a continuous film structure containing 70 weight percent TCP. Following compaction and melting, PCL/TCP composite scaffolds are found to display uniform distribution of TCP throughout the PCL matrix regardless of composition. Homogeneous spatial distribution is also achieved in fabricated 3D scaffolds. When seeded onto powder-processed PCL/TCP films, mesenchymal stem cells are found to undergo robust and uniform osteogenic differentiation, indicating the potential application of this approach to biofunctionalize scaffolds for tissue engineering applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Experimental verification of gain drop due to general ion recombination for a carbon-ion pencil beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tansho, Ryohei, E-mail: r-tansho@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke
Purpose: Accurate dose measurement in radiotherapy is critically dependent on correction for gain drop, which is the difference of the measured current from the ideal saturation current due to general ion recombination. Although a correction method based on the Boag theory has been employed, the theory assumes that ionized charge density in an ionization chamber (IC) is spatially uniform throughout the irradiation volume. For particle pencil beam scanning, however, the charge density is not uniform, because the fluence distribution of a pencil beam is not uniform. The aim of this study was to verify the effect of the nonuniformity of ionized charge density on the gain drop due to general ion recombination. Methods: The authors measured the saturation curve, namely, the applied voltage versus measured current, using a large plane-parallel IC and a 24-channel parallel-plate IC with concentric electrodes. To verify the effect of the nonuniform ionized charge density on the measured saturation curve, the authors calculated the saturation curve using a method which takes into account the nonuniform ionized charge density and compared it with the measured saturation curves. Results: Measurement values of the different saturation curves in the different channels of the concentric electrodes differed and were consistent with the calculated values. The saturation curves measured by the large plane-parallel IC were also consistent with the calculation results, including the estimation error of beam size and of setup misalignment. Although the impact of the nonuniform ionized charge density on the gain drop was clinically negligible with the conventional beam intensity, it was expected that the impact would increase with higher ionized charge density. Conclusions: For pencil beam scanning, the assumption of the conventional Boag theory is not valid.
Furthermore, the nonuniform ionized charge density affects the prediction accuracy of gain drop when the ionized charge density is increased by a higher dose rate and/or smaller beam size.
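The effect described above can be illustrated with a simple calculation. This is a hedged sketch under our own assumptions, not the authors' method: we use Boag's pulsed-beam collection efficiency f(u) = ln(1+u)/u, with u proportional to the local ionized charge density, and compare a single uniform-density evaluation against a charge-weighted average over a Gaussian pencil-beam profile.

```python
# Sketch: Boag pulsed-beam collection efficiency for a uniform charge
# density versus a charge-weighted average over a Gaussian radial profile.
# The peak recombination parameter u_peak = 0.2 is an arbitrary example.

import math

def boag_f(u):
    """Boag pulsed-beam collection efficiency f(u) = ln(1+u)/u."""
    return math.log1p(u) / u if u > 0 else 1.0

def mean_f_gaussian(u_peak, n=2000):
    """Charge-weighted mean of f over a Gaussian radial density profile,
    integrated by midpoint rule out to 5 sigma (r in units of sigma)."""
    num = den = 0.0
    for i in range(n):
        r = 5.0 * (i + 0.5) / n
        rho = math.exp(-0.5 * r * r)      # local density relative to peak
        num += r * rho * boag_f(u_peak * rho)
        den += r * rho
    return num / den

u = 0.2
print(f"uniform assumption: f = {boag_f(u):.4f}")
print(f"Gaussian-weighted : f = {mean_f_gaussian(u):.4f}")
```

The Gaussian-weighted efficiency comes out higher, because most of the charge sits at densities below the peak where recombination is weaker; a uniform-density correction therefore overestimates the gain drop for a pencil beam, in line with the abstract's conclusion.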
NASA Astrophysics Data System (ADS)
Nicolaou, G.; Yamauchi, M.; Wieser, M.; Nilsson, H.; Behar, E.; Stenberg Wieser, G.
2016-12-01
The Ion Composition Analyzer (ICA) on board ROSETTA is part of the Rosetta Plasma Consortium (RPC). It is designed to measure the 3-D velocity distribution function of the plasma ions in the environment of comet 67P/Churyumov-Gerasimenko. Besides the solar wind plasma ions, ICA detected heavy ions of cometary origin both at low energy (< 100 eV) and in the keV range. So far, ICA has distinguished ions of water origin, and in principle it should be able to detect CO2+. However, we have not yet succeeded in separating CO2+ ions from O+ or H2O+ ions, mainly because of non-uniform sensitivity and noise levels across mass channels and azimuthal sectors, and high cross talk. In May 2016, when ROSETTA was relatively close to the comet (between 6 and 20 km), we observed a second plasma ion population at higher energy per charge (about 60-200 eV/q) than the water-group ions at 30-50 eV/q. To examine whether this secondary population is still the water group or other ions, such as CO2+, we cleaned the raw data by correcting the non-uniform sensitivity, assuming that the noise level should be uniform over different channels. After this simple cleaning we found that the mass peak at low energy and that at higher energy are occasionally similar but in some cases quite different. Furthermore, we investigated a few cases where the low-energy mass peak seems to consist of different Gaussian slopes, indicating that it could be composed of two mass peaks. In this presentation we describe the techniques we follow to process the data and show how we identify the secondary ion component in the May 2016 data.
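The cleaning step described above amounts to a flat-field correction and can be sketched as follows. This is our own paraphrase under the abstract's stated assumption, not the team's pipeline: if the instrumental noise floor should be uniform across channels, then the measured per-channel noise level itself estimates the relative sensitivity, and dividing the raw counts by that relative gain flattens the response. The numbers are invented example data.

```python
# Hedged sketch: per-channel flat-fielding assuming the noise floor should
# be uniform, so noise level serves as a proxy for relative sensitivity.

def flat_field(raw_counts, noise_levels):
    """Correct per-channel counts for non-uniform sensitivity.
    raw_counts, noise_levels: lists of equal length, one entry per channel."""
    mean_noise = sum(noise_levels) / len(noise_levels)
    gains = [n / mean_noise for n in noise_levels]   # relative sensitivity
    return [c / g for c, g in zip(raw_counts, gains)]

# Example (assumed data): channel 2 is twice as sensitive as the others
raw = [100.0, 240.0, 90.0, 110.0]
noise = [1.0, 2.0, 1.0, 1.0]
print(flat_field(raw, noise))   # → [125.0, 150.0, 112.5, 137.5]
```

After correction, the apparent excess in the sensitive channel largely disappears, which is the point of the cleaning: residual differences between mass peaks can then be attributed to composition rather than to instrumental gain.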
Chou, Cheng-Ying; Huang, Chih-Kang; Lu, Kuo-Wei; Horng, Tzyy-Leng; Lin, Win-Li
2013-01-01
The transport and accumulation of anticancer nanodrugs in tumor tissues are affected by many factors including particle properties, vascular density and leakiness, and interstitial diffusivity. It is important to understand the effects of these factors on the detailed drug distribution in the entire tumor for an effective treatment. In this study, we developed a small-scale mathematical model to systematically study the spatiotemporal responses and accumulative exposures of macromolecular carriers in localized tumor tissues. We chose various dextrans as model carriers and studied the effects of vascular density, permeability, diffusivity, and half-life of dextrans on their spatiotemporal concentration responses and accumulative exposure distribution to tumor cells. The relevant biological parameters were obtained from experimental results previously reported by the Dreher group. The area under the concentration-time response curve (AUC) quantifies the extent of tissue exposure to a drug and was therefore considered more reliable in assessing the overall drug exposure than individual concentrations. The results showed that 1) a small macromolecule can penetrate deep into the tumor interstitium and produce a uniform but low spatial distribution of AUC; 2) large macromolecules produce high AUC in the perivascular region, but low AUC in the distal region away from vessels; 3) medium-sized macromolecules produce a relatively uniform and high AUC in the tumor interstitium between two vessels; 4) enhancement of permeability can elevate the level of AUC but has little effect on its uniformity, while enhancement of diffusivity is able to raise the level of AUC and improve its uniformity; 5) a longer half-life can produce a deeper penetration and a higher level of AUC distribution.
The numerical results indicate that a long half-life carrier in plasma and a high interstitial diffusivity are the key factors to produce a high and relatively uniform spatial AUC distribution in the interstitium. PMID:23565142
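The AUC metric used throughout the abstract can be sketched as a trapezoidal integral of the concentration-time curve at each tissue location. This is our own minimal example, not the authors' model; the sampled concentrations below are invented.

```python
# Sketch: area under a concentration-time curve (AUC) by the trapezoidal rule.

def auc(times, conc):
    """Trapezoidal area under a concentration-time curve.
    times: sample times; conc: concentrations at those times."""
    return sum(0.5 * (conc[i] + conc[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Example (assumed data): concentration (a.u.) at one interstitial point,
# sampled at 0, 1, 2, 4 and 8 hours
t = [0.0, 1.0, 2.0, 4.0, 8.0]
c = [0.0, 0.8, 1.0, 0.6, 0.1]
print(f"AUC = {auc(t, c):.2f}")   # → AUC = 4.30
```

Computing this quantity at every grid point of the tissue model yields the spatial AUC distribution whose level and uniformity the abstract compares across carrier sizes.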
Earthquake Potential Models for China
NASA Astrophysics Data System (ADS)
Rong, Y.; Jackson, D. D.
2002-12-01
We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record.
For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
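A modified Gutenberg-Richter distribution with a corner magnitude, of the kind all three models employ, can be sketched as follows. This is a hedged illustration of one common form (an exponential taper in seismic moment); the exact parameterization the authors use may differ, and the a-, b-, and corner-magnitude values below are arbitrary examples.

```python
# Sketch: tapered ("corner-magnitude") Gutenberg-Richter cumulative rate.
# Plain GR gives rate ~ 10**(a - b*m); the exponential moment taper makes
# the rate fall off sharply above the corner magnitude.

import math

def moment(m):
    """Scalar seismic moment (N m) from moment magnitude m."""
    return 10 ** (1.5 * m + 9.1)

def tapered_gr_rate(m, a, b, m_corner, m_min=5.4):
    """Cumulative rate of events with magnitude >= m (normalized so the
    taper equals 1 at the threshold magnitude m_min)."""
    gr = 10 ** (a - b * m)
    taper = math.exp((moment(m_min) - moment(m)) / moment(m_corner))
    return gr * taper

# Example: near the threshold the taper is negligible...
print(tapered_gr_rate(6.0, a=4.0, b=1.0, m_corner=8.0))
# ...but well above the corner magnitude the rate is cut off sharply.
print(tapered_gr_rate(9.0, a=4.0, b=1.0, m_corner=8.0))
```

The corner magnitude thus plays the role the abstract describes: below it the distribution follows the usual b-value slope, and above it the rate decreases strongly.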
Schmitz, Max; Dähler, Fabian; Elvinger, François; Pedretti, Andrea; Steinfeld, Aldo
2017-04-10
We introduce a design methodology for nonimaging, single-reflection mirrors with polygonal inlet apertures that generate a uniform irradiance distribution on a polygonal outlet aperture, enabling a multitude of applications within the domain of concentrated photovoltaics. Notably, we present single-mirror concentrators of square and hexagonal perimeter that achieve very high irradiance uniformity on a square receiver at concentrations ranging from 100 to 1000 suns. These optical designs can be assembled in compound concentrators with maximized active area fraction by leveraging tessellation. More advanced multi-mirror concentrators, where each mirror individually illuminates the whole area of the receiver, allow for improved performance while permitting greater flexibility for the concentrator shape and robustness against partial shading of the inlet aperture.
Use of Radon for Evaluation of Atmospheric Transport Models: Sensitivity to Emissions
NASA Technical Reports Server (NTRS)
Gupta, Mohan L.; Douglass, Anne R.; Kawa, S. Randolph; Pawson, Steven
2004-01-01
This paper presents comparative analyses of atmospheric radon (Rn) distributions simulated using different emission scenarios and the observations. Results indicate that the model generally reproduces observed distributions of Rn but there are some biases in the model related to differences in large-scale and convective transport. Simulations presented here use an off-line three-dimensional chemical transport model driven by assimilated winds and two scenarios of Rn fluxes (atoms/cm2 s) from ice-free land surfaces: (A) globally uniform flux of 1.0, and (B) uniform flux of 1.0 between 60 deg. S and 30 deg. N followed by a sharp linear decrease to 0.2 at 70 deg. N. We considered an additional scenario (C) where Rn emissions for case A were uniformly reduced by 28%. Results show that case A overpredicts observed Rn distributions in both hemispheres. Simulated northern hemispheric (NH) Rn distributions from cases B and C compare better with the observations, but are not discernible from each other. In the southern hemisphere, surface Rn distributions from case C compare better with the observations. We performed a synoptic scale source-receptor analysis for surface Rn to locate regions with ratios B/A and B/C less than 0.5. Considering an uncertainty in regional Rn emissions of a factor of two, our analysis indicates that additional measurements of surface Rn particularly during April-October and north of 50 deg. N over the Pacific as well as Atlantic regions would make it possible to determine if the proposed latitude gradient in Rn emissions is superior to a uniform flux scenario.
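The three emission scenarios can be written out as simple latitude profiles. This is our own paraphrase of the text, not the authors' code; behavior south of 60 deg. S is not specified in the abstract, so the function below simply returns the uniform value there.

```python
# Sketch of the three Rn emission scenarios (flux in atoms/cm2 s over
# ice-free land): A is globally uniform, B tapers linearly north of 30 N,
# C is scenario A uniformly reduced by 28%.

def rn_flux(lat_deg, scenario):
    """Ice-free land-surface Rn flux for scenarios 'A', 'B', or 'C'."""
    if scenario == 'A':
        return 1.0
    if scenario == 'C':
        return 1.0 * (1.0 - 0.28)             # case A reduced by 28%
    if scenario == 'B':
        if lat_deg <= 30.0:                   # uniform up to 30 N
            return 1.0
        if lat_deg >= 70.0:
            return 0.2
        # linear decrease from 1.0 at 30 N to 0.2 at 70 N
        return 1.0 + (0.2 - 1.0) * (lat_deg - 30.0) / 40.0
    raise ValueError(f"unknown scenario: {scenario}")

print(f"{rn_flux(50.0, 'B'):.2f}")    # midpoint of the taper
```

Scenarios B and C coincide at 50 deg. N by construction (both give 0.6 and 0.72 of the uniform flux in that band's neighborhood), which is why the abstract finds the two cases hard to distinguish from northern-hemisphere data alone.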
Advanced Technology for Ultra-Low Power System-on-Chip (SoC)
2017-06-01
design at IDS = 1 mA/μm compared with that in an experimental 14 nm-node FinFET. The redistributed electric field along the channel length direction can... design can result in more uniform electron density and electron velocity distributions compared to a homojunction device. This uniform electron...