Sample records for volume averaged model

  1. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample... per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method... appendix A of this part) Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour...

  2. Plasma properties in electron-bombardment ion thrusters

    NASA Technical Reports Server (NTRS)

    Matossian, J. N.; Beattie, J. R.

    1987-01-01

    The paper describes a technique for computing volume-averaged plasma properties within electron-bombardment ion thrusters, using spatially varying Langmuir-probe measurements. Average values of the electron densities are defined by integrating the spatially varying Maxwellian and primary electron densities over the ionization volume, and then dividing by the volume. Plasma properties obtained in the 30-cm-diameter J-series and ring-cusp thrusters are analyzed by the volume-averaging technique. The superior performance exhibited by the ring-cusp thruster is correlated with a higher average Maxwellian electron temperature. The ring-cusp thruster maintains the same fraction of primary electrons as does the J-series thruster, but at a much lower ion production cost. The volume-averaged predictions for both thrusters are compared with those of a detailed thruster performance model.
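The averaging step described here (integrate the spatially varying density over the ionization volume, then divide by the volume) can be sketched numerically. A minimal midpoint-rule version for an assumed cylindrical discharge with a radial profile `n_of_r` (a hypothetical name, not the paper's notation):

```python
import math

def volume_averaged_density(n_of_r, R, nr=1000):
    """Volume-average a radially varying density n(r) over a cylinder of
    radius R (per unit length): n_avg = (1/V) * integral of n dV, with
    dV = 2*pi*r*dr, using the midpoint rule."""
    dr = R / nr
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        total += n_of_r(r) * 2.0 * math.pi * r * dr
    return total / (math.pi * R ** 2)
```

For a parabolic profile n(r) = 1 - (r/R)^2 the exact volume average is 0.5, which the sketch reproduces.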

  3. A Note on Spatial Averaging and Shear Stresses Within Urban Canopies

    NASA Astrophysics Data System (ADS)

    Xie, Zheng-Tong; Fuka, Vladimir

    2018-04-01

One-dimensional urban models embedded in mesoscale numerical models may place several grid points within the urban canopy. This requires an accurate parametrization of shear stresses (i.e., vertical momentum fluxes), including the dispersive stress and momentum sinks, at these points. We used a case study with a packing density of 33% and rigorously checked the vertical variation of the spatially averaged total shear stress, which can be used in a one-dimensional column urban model. We found that the intrinsic spatial average, in which the volume or area of the solid parts is excluded from the averaging, yields a greater time- and space-averaged total stress within the canopy, and a more abrupt change at the top of the buildings, than the comprehensive spatial average, in which the volume or area of the solid parts is included.
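The two averaging conventions compared in the note differ only in the denominator: the intrinsic average divides by fluid volume alone, the comprehensive average by total volume. A minimal sketch on gridded samples, with hypothetical names (`phi`, `solid`) rather than the authors' notation:

```python
def spatial_averages(phi, solid):
    """phi: field samples on a grid; solid: parallel booleans marking
    samples inside buildings. The intrinsic average divides by the fluid
    volume only; the comprehensive average divides by the total volume
    (solid cells contribute zero to the sum)."""
    fluid_vals = [p for p, s in zip(phi, solid) if not s]
    intrinsic = sum(fluid_vals) / len(fluid_vals)
    comprehensive = sum(fluid_vals) / len(phi)
    return intrinsic, comprehensive
```

For a field vanishing in the solid, comprehensive = intrinsic × (1 − packing density), so the intrinsic average is always the larger of the two, consistent with the abstract.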

  4. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    PubMed Central

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlations between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. Accuracy was evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
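The forecast score used in the study, mean absolute percentage error, is straightforward to state in code. A minimal sketch, not the study's actual implementation (the ARIMA fitting itself would use a statistics package):

```python
def mape(actual, forecast):
    """Mean absolute percentage error (in percent) between an actual
    series and a forecast series of equal length."""
    if len(actual) != len(forecast) or not actual:
        raise ValueError("series must be non-empty and of equal length")
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

For example, forecasts of 110 and 180 against actual values of 100 and 200 give a MAPE of 10%.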

  5. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    PubMed

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlations between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. Accuracy was evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.

  6. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
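The core of the reoptimization loop, convolving a calculated profile with a normalized detector response before comparing it to measurement, can be sketched as a discrete same-length convolution. The triangular response below is purely illustrative, not the CC13 response function:

```python
def convolve_profile(profile, response):
    """Convolve a calculated beam profile with a detector response
    function, normalized to unit area; output has the same length as
    the input profile (zero-padded edges)."""
    s = sum(response)
    kernel = [w / s for w in response]   # normalize to unit area
    half = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = i + j - half
            if 0 <= k < len(profile):
                acc += w * profile[k]
        out.append(acc)
    return out
```

In the flat interior of a profile the convolution leaves values unchanged; the smoothing only shows up where the profile has gradients, such as the penumbra.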

  7. Nonlinear mesomechanics of composites with periodic microstructure

    NASA Technical Reports Server (NTRS)

    Walker, Kevin P.; Jordan, Eric H.; Freed, Alan D.

    1989-01-01

This work is concerned with modeling the mechanical deformation or constitutive behavior of composites composed of a periodic microstructure under small displacement conditions at elevated temperature. A mesomechanics approach is adopted which relates the micromechanical behavior of the heterogeneous composite to its in-service macroscopic behavior. Two different methods, one based on a Fourier series approach and the other on a Green's function approach, are used in modeling the micromechanical behavior of the composite material. Although the constitutive formulations are based on a micromechanical approach, it should be stressed that the resulting equations are volume averaged to produce overall effective constitutive relations which relate the bulk, volume averaged, stress increment to the bulk, volume averaged, strain increment. As such, they are macromodels which can be used directly in nonlinear finite element programs such as MARC, ANSYS and ABAQUS or in boundary element programs such as BEST3D. In developing the volume averaged or effective macromodels from the micromechanical models, both approaches require the evaluation of volume integrals containing the spatially varying strain distributions throughout the composite material. By assuming that the strain distributions are spatially constant within each constituent phase-or within a given subvolume within each constituent phase-of the composite material, the volume integrals can be obtained in closed form. This simplified micromodel can then be volume averaged to obtain an effective macromodel suitable for use in the MARC, ANSYS and ABAQUS nonlinear finite element programs via user constitutive subroutines such as HYPELA and CMUSER. This effective macromodel can be used in a nonlinear finite element structural analysis to obtain the strain-temperature history at those points in the structure where thermomechanical cracking and damage are expected to occur, the so-called damage critical points of the structure.

  8. A Lagrangian dynamic subgrid-scale model of turbulence

    NASA Technical Reports Server (NTRS)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
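The approximate Lagrangian-averaging scheme described at the end (first-order Euler time advance plus linear interpolation at the upstream departure point) can be sketched in one dimension. The exponential weighting factor `eps` follows the usual pathline-relaxation form; all names are illustrative, not the paper's notation:

```python
def lagrangian_average_step(avg, new_vals, u, dt, dx, T):
    """One first-order Euler update of a pathline (Lagrangian) average on
    a periodic 1-D grid: linearly interpolate the old average at the
    upstream departure point x - u*dt, then relax it toward the
    instantaneous value new_vals with relaxation timescale T."""
    n = len(avg)
    eps = (dt / T) / (1.0 + dt / T)
    out = []
    for i in range(n):
        x_dep = (i - u * dt / dx) % n           # departure point, index units
        i0 = int(x_dep) % n
        frac = x_dep - int(x_dep)
        upstream = (1.0 - frac) * avg[i0] + frac * avg[(i0 + 1) % n]
        out.append(eps * new_vals[i] + (1.0 - eps) * upstream)
    return out
```

Uniform fields are left unchanged, and the average relaxes monotonically toward the instantaneous field, which is the property that keeps the model dissipative.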

  9. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    PubMed

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization are the high acquisition costs of structural magnetic resonance images. Here, we build a highly detailed (0.5mm(3) resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  10. Upscaling the Navier-Stokes Equation for Turbulent Flows in Porous Media Using a Volume Averaging Method

    NASA Astrophysics Data System (ADS)

    Wood, Brian; He, Xiaoliang; Apte, Sourabh

    2017-11-01

Turbulent flows through porous media are encountered in a number of natural and engineered systems. Many attempts to close the Navier-Stokes equations for such flows have been made, for example using RANS models and double averaging. Whitaker (1996), on the other hand, applied the volume averaging theorem to close the macroscopic N-S equation for low-Re flow. In this work, volume averaging theory is extended into the turbulent flow regime to posit a relationship between the macroscale velocities and the spatial velocity statistics in terms of the spatially averaged velocity only. Rather than developing a Reynolds stress model, we propose a simple algebraic closure, consistent with generalized effective viscosity models (Pope 1975), to represent the spatially fluctuating velocity and pressure, respectively. The coefficients (one 1st-order, two 2nd-order and one 3rd-order tensor) of the linear functions depend on the averaged velocity and its gradient. With the data set from DNS, performed for inertial and turbulent flows (pore Re of 300, 500 and 1000) through a periodic face-centered cubic (FCC) unit cell, all the unknown coefficients can be computed and the closure is complete. The macroscopic quantities calculated from the averaging are then compared with DNS data to verify the upscaling. NSF Project Numbers 1336983, 1133363.
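The starting point of any such volume-averaging closure is the decomposition of the pore-scale velocity into a spatial average and a fluctuation, u = ⟨u⟩ + ũ, with ⟨ũ⟩ = 0. A minimal sketch on a list of samples, with hypothetical names:

```python
def decompose(u):
    """Split pore-scale velocity samples into their spatial average and
    the fluctuation about it, u = <u> + u~; the fluctuation averages to
    zero by construction."""
    mean = sum(u) / len(u)
    fluct = [v - mean for v in u]
    return mean, fluct
```

The closure proposed in the abstract then models statistics of the fluctuation `fluct` algebraically in terms of `mean` and its gradient, rather than transporting them with extra equations.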

  11. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  12. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  13. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... part) Hydrogen chloride 62 parts per million by dry volume 3-run average (1 hour minimum sample time...) Sulfur dioxide 20 parts per million by dry volume 3-run average (1 hour minimum sample time per run...-8) or ASTM D6784-02 (Reapproved 2008).c Opacity 10 percent Three 1-hour blocks consisting of ten 6...

  14. Bulk chlorine uptake by polyamide active layers of thin-film composite membranes upon exposure to free chlorine-kinetics, mechanisms, and modeling.

    PubMed

    Powell, Joshua; Luh, Jeanne; Coronell, Orlando

    2014-01-01

    We studied the volume-averaged chlorine (Cl) uptake into the bulk region of the aromatic polyamide active layer of a reverse osmosis membrane upon exposure to free chlorine. Volume-averaged measurements were obtained using Rutherford backscattering spectrometry with samples prepared at a range of free chlorine concentrations, exposure times, and mixing, rinsing, and pH conditions. Our volume-averaged measurements complement previous studies that have quantified Cl uptake at the active layer surface (top ≈ 7 nm) and advance the mechanistic understanding of Cl uptake by aromatic polyamide active layers. Our results show that surface Cl uptake is representative of and underestimates volume-averaged Cl uptake under acidic conditions and alkaline conditions, respectively. Our results also support that (i) under acidic conditions, N-chlorination followed by Orton rearrangement is the dominant Cl uptake mechanism with N-chlorination as the rate-limiting step; (ii) under alkaline conditions, N-chlorination and dechlorination of N-chlorinated amide links by hydroxyl ion are the two dominant processes; and (iii) under neutral pH conditions, the rates of N-chlorination and Orton rearrangement are comparable. We propose a kinetic model that satisfactorily describes Cl uptake under acidic and alkaline conditions, with the largest discrepancies between model and experiment occurring under alkaline conditions at relatively high chlorine exposures.
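With N-chlorination rate-limiting under acidic conditions, a schematic saturating rate law can illustrate the shape of the uptake kinetics. This is a generic first-order uptake equation for illustration only, not the authors' fitted model, and all names are hypothetical:

```python
def cl_uptake(k, hocl, c_max, t_end, dt=0.01):
    """Schematic rate-limiting N-chlorination kinetics:
    dC/dt = k * [HOCl] * (C_max - C), integrated with forward Euler.
    Returns the bound-Cl concentration at t_end (starting from C = 0)."""
    c = 0.0
    t = 0.0
    while t < t_end:
        c += dt * k * hocl * (c_max - c)
        t += dt
    return c
```

The analytic solution is C(t) = C_max(1 − exp(−k[HOCl]t)), so the uptake rises monotonically and saturates at C_max for long exposures.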

  15. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can be beneficial, while for other problems an Eulerian one has advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when the assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.

  16. Estimating merchantable tree volume in Oregon and Washington using stem profile models

    Treesearch

    Raymond L. Czaplewski; Amy S. Brown; Dale G. Guenther

    1989-01-01

    The profile model of Max and Burkhart was fit to eight tree species in the Pacific Northwest Region (Oregon and Washington) of the Forest Service. Most estimates of merchantable volume had an average error less than 10% when applied to independent test data for three national forests.

  17. Effect of Cross-Linking on Free Volume Properties of PEG Based Thiol-Ene Networks

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, Ramesh; Vasagar, Vivek; Nazarenko, Sergei

According to the Fox and Loshaek theory, free volume in elastomeric networks decreases linearly with increasing cross-link density. The aim of this study is to test whether poly(ethylene glycol) (PEG) based multicomponent thiol-ene elastomeric networks demonstrate this model behavior. Networks spanning a broad cross-link density range were prepared by changing the ratio of the trithiol crosslinker to PEG dithiol and then UV cured with PEG diene while maintaining 1:1 thiol:ene stoichiometry. Pressure-volume-temperature (PVT) data of the networks were generated from high-pressure dilatometry experiments and fit using the Simha-Somcynsky equation-of-state analysis to obtain the fractional free volume of the networks. Using positron annihilation lifetime spectroscopy (PALS), the average free-volume hole size of the networks was also quantified. Both the fractional free volume and the average free-volume hole size changed linearly with cross-link density, confirming that the Fox and Loshaek theory can be applied to this multicomponent system. Gas diffusivities of the networks correlated well with free volume. A free-volume-based model was developed to describe the gas diffusivity trends as a function of cross-link density.

  18. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    PubMed

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of an LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed, yielding several interesting conclusions. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then levels off. Second, supply chain performance is best when the volume of the inserted order equals the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process, the larger the possible inserted order volume. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.

  19. Effect of tank geometry on its average performance

    NASA Astrophysics Data System (ADS)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

A mathematical model of the non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. The average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls are calculated as functions of their height-to-radius (H/R) ratio, as well as the average productivity, degree of filling, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ as the central hole diameter of the ribs is varied. It is shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases the average tank productivity and reduces the filling time. Increasing the H/R ratio of a 1.0 m³ tank to the limiting values (in comparison with the standard tank having H/R equal to 3.49) raises tank productivity by 23.5% and the heat exchange area by 20%. We have also demonstrated that the maximum average productivity and minimum filling time of the 6×10⁻² m³ tank are reached with a horizontal-rib central hole diameter of 6.4×10⁻² m.

  20. How large is the typical subarachnoid hemorrhage? A review of current neurosurgical knowledge.

    PubMed

    Whitmore, Robert G; Grant, Ryan A; LeRoux, Peter; El-Falaki, Omar; Stein, Sherman C

    2012-01-01

Despite the morbidity and mortality of subarachnoid hemorrhage (SAH), the average volume of a typical hemorrhage is not well defined. Animal models of SAH often do not accurately mimic the human disease process. The purpose of this study is to estimate the average SAH volume, allowing standardization of animal models of the disease. We performed a MEDLINE search of SAH volume and erythrocyte counts in human cerebrospinal fluid, as well as of blood volumes used in animal injection models of SAH, from 1956 to 2010. We polled members of the American Association of Neurological Surgeons (AANS) for estimates of typical SAH volume. Using quantitative data from the literature, we calculated the total volume of SAH as the volume of blood clotted in the basal cisterns plus the volume of dispersed blood in cerebrospinal fluid. The results of the AANS poll confirmed our estimates. The human literature yielded 322 publications, and the animal literature 237 studies. Four quantitative human studies reported blood clot volumes ranging from 0.2 to 170 mL, with a mean of ∼20 mL. There was only one quantitative study reporting cerebrospinal fluid red blood cell counts from serial lumbar puncture after SAH. Dispersed blood volume ranged from 2.9 to 45.9 mL, and we used the mean of 15 mL for our calculation. The total volume of SAH therefore equals approximately 35 mL. The AANS poll yielded 176 responses, ranging from 2 to 350 mL, with a mean of 33.9 ± 4.4 mL. Based on our estimate of a total SAH volume of 35 mL, animal injection models may now be standardized for a more accurate portrayal of the human disease process. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. 40 CFR Table 7 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Energy Recovery Units After May 20, 2011

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... volume Biomass—490 parts per million dry volumeCoal—59 parts per million dry volume 3-run average (1 hour... with a concentration of 1000 ppm or less for biomass-fed boilers. Dioxins/furans (total mass basis) 2.9... million dry volume Biomass—290 parts per million dry volumeCoal—340 parts per million dry volume 3-run...

  2. Correlating Free-Volume Hole Distribution to the Glass Transition Temperature of Epoxy Polymers.

    PubMed

    Aramoon, Amin; Breitzman, Timothy D; Woodward, Christopher; El-Awady, Jaafar A

    2017-09-07

A new algorithm is developed to quantify the free-volume hole distribution and its evolution in coarse-grained molecular dynamics simulations of polymeric networks. This is achieved by analyzing the geometry of the network rather than a voxelized image of the structure, to accurately and efficiently find and quantify free-volume hole distributions within large-scale simulations of polymer networks. The free-volume holes are quantified by fitting the largest ellipsoids and spheres into the free volumes between polymer chains. The free-volume hole distributions calculated from this algorithm are shown to be in excellent agreement with those measured from positron annihilation lifetime spectroscopy (PALS) experiments at different temperatures and pressures. Based on the results predicted using this algorithm, an evolution model is proposed for the thermal behavior of an individual free-volume hole. This model is calibrated such that the average radius of free-volume holes mimics the one predicted from the simulations. The model is then employed to predict the glass-transition temperature of epoxy polymers with different degrees of cross-linking and lengths of prepolymers. Comparison between the predicted glass-transition temperatures and those measured from simulations or experiments implies that this model is capable of successfully predicting the glass-transition temperature of the material using only a PDF of the initial free-volume hole radii of each microstructure. This provides an effective approach for the optimized design of polymeric systems on the basis of the glass-transition temperature, degree of cross-linking, and average length of prepolymers.
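The sphere-fitting step can be illustrated in a simplified form: at a candidate point between chains, the largest inscribed sphere radius is the distance to the nearest bead surface. The paper fits ellipsoids as well and searches over candidate locations; the sketch below fixes one candidate point, and all names are hypothetical:

```python
import math

def hole_radius(candidate, beads, bead_radius):
    """Radius of the largest sphere centred at `candidate` that fits
    between polymer beads of radius `bead_radius`: the distance to the
    nearest bead centre minus the bead radius (clamped at zero)."""
    d = min(math.dist(candidate, b) for b in beads)
    return max(d - bead_radius, 0.0)
```

A full implementation would maximize this radius over candidate centres (e.g., Voronoi vertices of the bead positions) to find each hole's largest inscribed sphere.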

  3. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...

  4. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...

  5. Effect of sample volume on metastable zone width and induction time

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2012-04-01

The metastable zone width (MSZW) and the induction time measured for a large sample (say > 0.1 L) are reproducible and deterministic, while for a small sample (say < 1 mL) these values are irreproducible and stochastic. Such behaviors of the MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distributions of the stochastic MSZW and induction time were derived. The average values of the stochastic MSZW and induction time both decreased with increasing sample volume, while the deterministic MSZW and induction time remained unchanged. These different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of the MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
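For a Poisson (stochastic) nucleation model, the induction-time distribution and its mean follow directly from the nucleation rate J per unit volume and the sample volume V; these standard expressions reproduce the observed decrease of the average induction time with increasing sample volume. A minimal sketch (the abstract's models may differ in detail):

```python
import math

def induction_time_cdf(t, J, V):
    """Probability that the first nucleus has appeared by time t for
    Poisson nucleation at rate J per unit volume in volume V."""
    return 1.0 - math.exp(-J * V * t)

def mean_induction_time(J, V):
    """Average stochastic induction time <t> = 1/(J*V); it halves when
    the sample volume doubles."""
    return 1.0 / (J * V)
```

The exponential form also explains the irreproducibility at small V: the distribution is broad relative to its mean, so individual measurements scatter widely.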

  6. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    NASA Astrophysics Data System (ADS)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is "volume-averaging", which refers to the fact that lidars do not sample at a single, distinct point but along their entire beam length. This can be detrimental, especially in regions with large velocity gradients such as the rotor wake. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Volume-averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
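The continuous-wave-style weighting can be sketched as a Gaussian-weighted average of the line-of-sight velocity along the beam. The Gaussian probe-volume parameterization below is an assumption for illustration, not the authors' algorithm:

```python
import math

def lidar_average(u_of_s, s0, fwhm, n=201):
    """Weight velocity samples along the beam with a Gaussian probe
    volume of the given FWHM centred at focus distance s0, mimicking
    continuous-wave lidar volume-averaging."""
    sigma = fwhm / 2.355                     # FWHM -> standard deviation
    ds = 6.0 * sigma / (n - 1)               # sample +/- 3 sigma
    num = den = 0.0
    for i in range(n):
        s = s0 - 3.0 * sigma + i * ds
        w = math.exp(-0.5 * ((s - s0) / sigma) ** 2)
        num += w * u_of_s(s)
        den += w
    return num / den
```

In uniform flow the averaged value equals the point value; only where the velocity varies along the beam, as at the wake edges, does the lidar reading deviate from a point measurement.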

  7. TH-A-BRF-02: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - Modeling Tumor Evolution for Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Lee, CG; Chan, TCY

    2014-06-15

    Purpose: To develop mathematical models of tumor geometry changes under radiotherapy that may support future adaptive paradigms. Methods: A total of 29 cervical cancer patients were scanned using MRI, once for planning and weekly thereafter for treatment monitoring. Using the tumor volumes contoured by a radiologist, three mathematical models were investigated based on the assumption of a stochastic process of tumor evolution. The “weekly MRI” model predicts tumor geometry for the following week from the last two consecutive MRI scans, based on the voxel transition probability. The other two models use only the first pair of consecutive MRI scans, and the transition probabilities were estimated via a tumor type classified from the entire data set. The classification is based on either measuring the tumor volume (the “weekly volume” model) or implementing an auxiliary “Markov chain” model. These models were compared to a constant volume approach that represents current clinical practice, using various model parameters; e.g., the threshold probability β converts the probability map into a tumor shape (a larger threshold implies a smaller tumor). Model performance was measured using the volume conformity index (VCI), i.e., the square of the union of the actual and modeled target volumes divided by the product of the two volumes. Results: The “weekly MRI” model outperforms the constant volume model by 26% on average, and by 103% for the worst 10% of cases in terms of VCI, under a wide range of β. The “weekly volume” and “Markov chain” models outperform the constant volume model by 20% and 16% on average, respectively. They also perform better than the “weekly MRI” model when β is large. Conclusion: It has been demonstrated that mathematical models can be developed to predict tumor geometry changes for cervical cancer undergoing radiotherapy. The models can potentially support an adaptive radiotherapy paradigm by reducing normal tissue dose.
This research was supported in part by the Ontario Consortium for Adaptive Interventions in Radiation Oncology (OCAIRO) funded by the Ontario Research Fund (ORF) and the MITACS Accelerate Internship Program.
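The VCI defined above can be sketched for voxelized volumes as follows (a set-based toy example, not the study's code; the voxel sets are hypothetical):

```python
def volume_conformity_index(actual, modeled):
    """VCI as defined in the abstract: the square of the union of the
    two volumes divided by the product of the two volumes. Perfect
    agreement gives 1; any mismatch pushes the value above 1."""
    union = len(actual | modeled)
    return union * union / (len(actual) * len(modeled))

# Hypothetical voxel index sets standing in for contoured volumes.
actual = set(range(0, 10))
perfect = set(range(0, 10))
shifted = set(range(5, 15))
vci_perfect = volume_conformity_index(actual, perfect)
vci_shifted = volume_conformity_index(actual, shifted)
```

With a half-overlapping prediction the union has 15 voxels, giving VCI = 15²/(10·10) = 2.25, while perfect overlap gives exactly 1.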

  8. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference, and heavy ion purification. In this paper, a model is presented to simulate the charging process in homogeneous porous media comprising large pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. Finally, the source terms that appear in the averaged equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  9. Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model

    USGS Publications Warehouse

    Granato, Gregory; Jones, Susan Cheung

    2017-01-01

    The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.

  10. An Order Insertion Scheduling Model of Logistics Service Supply Chain Considering Capacity and Time Factors

    PubMed Central

    Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing normal time scheduling, especially in the environment of mass-customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of an LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserting order volume first increases and then levels off. Second, supply chain performance is best when the volume of the inserting order equals the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process, the larger the possible inserting order volume. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful. PMID:25276851

  11. Geohydrology and numerical simulation of the alluvium and terrace aquifer along the Beaver-North Canadian River from the Panhandle to Canton Lake, northwestern Oklahoma

    USGS Publications Warehouse

    Davis, Robert E.; Christenson, Scott C.

    1981-01-01

    A quantitative description of the hydrologic system in alluvium and terrace deposits along the Beaver-North Canadian River in northwestern Oklahoma is needed as an aid for planning and management of the aquifer. A two-dimensional finite-difference model was used to describe the aquifer and to predict the effects of future ground-water withdrawals. The aquifer principally consists of three geologic units: alluvium with an average thickness of 30 feet, low terrace deposits with an average thickness of 50 feet, and high terrace deposits with an average thickness of 70 feet. A thin cover of dune sand overlies much of the area and provides an excellent catchment for recharge, but is generally unsaturated. Hydraulic conductivity of the aquifer ranges from 0 to 160 feet per day and averages 59 feet per day. Specific yield is estimated to be 0.29. Recharge to the aquifer is approximately 1 inch annually. Under present conditions (1978), most discharge is the result of ground-water flow to the Beaver-North Canadian River at a rate of 36 cubic feet per second and of pumpage for public-supply, industrial, and irrigation use at a rate of 28 cubic feet per second. In 1978, the aquifer had an average saturated thickness of 31 feet and contained 4.07 million acre-feet of water. The model was used to predict future head response in the aquifer to various pumping stresses. For any one area, the pumping stress was applied until the saturated thickness for that area was less than 5 feet, at which time the pumping ceased. The results of the modeled projections show that if the aquifer is stressed from 1978 to 1993 at the 1977 pumpage rates and well distribution, the average saturated thickness will decrease 1.0 foot and the volume of water in storage will be 3.94 million acre-feet, or 97 percent of the 1978 volume.
If the aquifer is stressed at this same rate until 2020, the average saturated thickness will decrease an additional 0.7 foot and the volume of water in storage will be 3.84 million acre-feet, or 94 percent of the 1978 volume. If all areas of the aquifer having a 1978 saturated thickness of 5 feet or more are stressed from 1978 to 1993 at a rate of approximately 1.4 acre-feet per acre per year, the average saturated thickness will decrease by 20.9 feet and the volume of water in storage will be 1.28 million acre-feet, or 31 percent of the 1978 volume. If the aquifer is stressed at this same rate until 2020, the average saturated thickness will decrease an additional 2.2 feet and the volume of water in storage will be 980,000 acre-feet, or 24 percent of the 1978 volume. The water in the aquifer is generally of the calcium bicarbonate type and is suitable for most uses. Most of the 30 water samples analyzed contained less than 500 milligrams of dissolved solids per liter.
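The storage figures above follow from the usual relation: drainable water in storage equals aquifer area times average saturated thickness times specific yield. A quick consistency check on the reported 1978 numbers (the implied area is a back-calculation, not a value from the report):

```python
def water_in_storage(area_acres, saturated_thickness_ft, specific_yield):
    """Drainable ground water in storage, in acre-feet."""
    return area_acres * saturated_thickness_ft * specific_yield

# Reported 1978 values: 4.07 million acre-ft stored, 31 ft average
# saturated thickness, specific yield 0.29.
implied_area = 4.07e6 / (31.0 * 0.29)   # acres, back-calculated
storage = water_in_storage(implied_area, 31.0, 0.29)
```

The implied area is roughly 450,000 acres, a plausible footprint for the alluvium and terrace deposits described.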

  12. Synthetic resistivity calculations for the canonical depth-to-bedrock problem: A critical examination of the thin interbed problem and electrical equivalence theories

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Knight, R.

    2009-05-01

    One of the key factors in the sensible inference of subsurface geologic properties from both field and laboratory experiments is the ability to quantify the linkages between inherently fine-scale structures, such as bedding planes and fracture sets, and their macroscopic expression through geophysical interrogation. Central to this idea is the concept of a "minimal sampling volume" over which a given geophysical method responds to an effective medium property whose value is dictated by the geometry and distribution of sub-volume heterogeneities as well as the experiment design. In this contribution we explore the concept of effective resistivity volumes for the canonical depth-to-bedrock problem subject to industry-standard DC resistivity survey designs. Four models representing a sedimentary overburden and flat bedrock interface were analyzed through numerical experiments of six different resistivity arrays. In each of the four models, the sedimentary overburden consists of thinly interbedded resistive and conductive laminations, with equivalent volume-averaged resistivity but differing lamination thickness, geometry, and layering sequence. The numerical experiments show striking differences in the apparent resistivity pseudo-sections which belie the volume-averaged equivalence of the models. These models constitute the synthetic data set offered for inversion in this Back to Basics Resistivity Modeling session and offer the promise to further our understanding of how the sampling volume, as affected by survey design, can be constrained by joint-array inversion of resistivity data.

  13. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p=0.008) reduction in error, from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
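The elliptical-area correction above can be sketched directly from Q = v·π·a·b; the semi-axes and velocity below are hypothetical, chosen only to mirror the reported average geometry (major axis 8.6% longer than the minor):

```python
import math

def volume_flow(mean_velocity, semi_major, semi_minor):
    """Volume flow through an elliptical cross-section, Q = v*pi*a*b."""
    return mean_velocity * math.pi * semi_major * semi_minor

a = 5.1e-3                        # semi-major axis, m (10.2 mm major axis)
b = a / 1.086                     # semi-minor axis, 8.6% smaller
v = 0.5                           # mean velocity, m/s (hypothetical)
q_ellipse = volume_flow(v, a, b)
q_circle = volume_flow(v, b, b)   # circular-area assumption, radius b
relative_error = (q_circle - q_ellipse) / q_ellipse
```

Assuming a circle with the minor-axis radius underestimates the flow by b/a − 1, about −7.9% for this geometry, before any off-axis beam correction is applied.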

  14. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10..., appendix A-3 or appendix A-8). Sulfur dioxide 11 parts per million dry volume 3-run average (1 hour minimum... Apply to Incinerators on and After [Date to be specified in state plan] a 6 Table 6 to Subpart DDDD of...

  15. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10..., appendix A-3 or appendix A-8). Sulfur dioxide 11 parts per million dry volume 3-run average (1 hour minimum... Apply to Incinerators on and After [Date to be specified in state plan] a 6 Table 6 to Subpart DDDD of...

  16. Growth process and model simulation of three different classes of Schima superba in a natural subtropical forest in China

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Deng, Xiangwen; Ouyang, Shuai; Chen, Lijun; Chu, Yonghe

    2017-01-01

    Schima superba is an important fire-resistant, high-quality timber species in southern China. Growth in height, diameter at breast height (DBH), and volume of three different classes (overtopped, average, and dominant) of S. superba were examined in a natural subtropical forest. Four growth models (Richards, edited Weibull, Logistic, and Gompertz) were selected to fit the growth of the three classes of trees. The results showed that the height and DBH current annual growth processes of all three classes fluctuated. Multiple intersections were found between the current annual increment (CAI) and mean annual increment (MAI) curves of both height and DBH, but there was no intersection between the volume CAI and MAI curves. All selected models could be used to fit the growth of the three classes of S. superba, with coefficients of determination above 0.9637. However, the edited Weibull model performed best, with the highest R2 and the lowest root mean square error (RMSE). S. superba is a fast-growing tree with a higher growth rate during youth. The height and DBH CAIs of overtopped, average, and dominant trees reached growth peaks at ages 5-10, 10-15, and 15-20 years, respectively. According to the model simulation, the volume CAIs of overtopped, average, and dominant trees reached growth peaks at ages 17, 55, and 76 years, respectively. The biological rotation ages of the overtopped, average, and dominant trees of S. superba were 29, 85, and 128 years, respectively.
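The biological rotation age referred to above is the age at which the mean annual increment (MAI = V(t)/t) peaks, which is also where the CAI and MAI curves cross. A sketch using the Logistic form, one of the four fitted model families (the parameters are invented for illustration, not the fitted S. superba values):

```python
import math

def logistic_volume(t, K=1.0, r=0.12, t0=40.0):
    """Logistic stem-volume growth curve V(t) = K / (1 + exp(-r*(t - t0)))."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def rotation_age(volume, t_max=200):
    """Integer age maximizing mean annual increment V(t)/t, i.e. the age
    at which the CAI and MAI curves intersect."""
    return max(range(1, t_max + 1), key=lambda t: volume(t) / t)

age = rotation_age(logistic_volume)
mai_peak = logistic_volume(age) / age
```

Before the peak CAI exceeds MAI and the mean increment is still rising; after it, MAI declines, which is why harvesting at this age maximizes long-run volume yield.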

  17. Equivalent uniform dose concept evaluated by theoretical dose volume histograms for thoracic irradiation.

    PubMed

    Dumas, J L; Lorchel, F; Perrot, Y; Aletti, P; Noel, A; Wolf, D; Courvoisier, P; Bosset, J F

    2007-03-01

    The goal of our study was to quantify the limits of the EUD models for use in score functions in inverse planning software and for clinical application. We focused on oesophagus cancer irradiation. Our evaluation was based on theoretical dose volume histograms (DVHs), which we analyzed using volumetric and linear quadratic EUD models, average and maximum dose concepts, the linear quadratic model, and the differential area between each DVH. We evaluated our models using theoretical and more complex DVHs for the above regions of interest. We studied three types of DVH for the target volume: the first followed the ICRU dose homogeneity recommendations; the second was built from the first's requirements with the same average dose built in for all cases; the third was truncated by a small dose hole. We also built theoretical DVHs for the organs at risk, in order to evaluate the limits of, and the ways to use, both EUD(1) and EUD/LQ models, comparing them to the traditional ways of scoring a treatment plan. For each volume of interest we built theoretical treatment plans with differences in fractionation. We concluded that both volumetric and linear quadratic EUDs should be used. Volumetric EUD(1) takes into account neither hot-cold spot compensation nor differences in fractionation, but it is more sensitive to an increase of the irradiated volume. With linear quadratic EUD/LQ, a volumetric analysis of the fractionation variation effect can be performed.
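For reference, the generalized EUD underlying such models reduces a DVH to a single dose via a volume-weighted power mean. A minimal sketch (the toy DVH is invented; a = 1 recovers the mean dose, while a negative exponent lets a cold spot dominate, as is conventional for target volumes):

```python
def generalized_eud(dvh, a):
    """Generalized EUD from a differential DVH of (fractional_volume,
    dose_Gy) pairs: EUD = (sum_i v_i * D_i**a) ** (1/a)."""
    return sum(v * d ** a for v, d in dvh) ** (1.0 / a)

# Hypothetical target DVH: 90% of the volume at 60 Gy, a 10% cold
# spot at 40 Gy (the "small dose hole" scenario described above).
dvh = [(0.9, 60.0), (0.1, 40.0)]
mean_dose = generalized_eud(dvh, a=1)      # volume-weighted mean dose
target_eud = generalized_eud(dvh, a=-10)   # pulled toward the cold spot
```

The mean dose of 58 Gy hides the cold spot, whereas the negative-exponent EUD drops toward 40 Gy, illustrating why a mean-dose score and an EUD score can rank the same plan differently.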

  18. An analytical approach to obtaining JWL parameters from cylinder tests

    NASA Astrophysics Data System (ADS)

    Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.

    2017-01-01

    An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
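The p-Vr curve and the energy integral mentioned above can be sketched from the standard JWL principal-isentrope form; the coefficients below are illustrative round numbers for a high-performance explosive, not the EDC37 fit from the paper:

```python
import math

def jwl_pressure(v, A, B, C, R1, R2, omega):
    """JWL principal-isentrope pressure as a function of relative volume:
    p(v) = A*exp(-R1*v) + B*exp(-R2*v) + C*v**-(omega + 1)."""
    return (A * math.exp(-R1 * v) + B * math.exp(-R2 * v)
            + C * v ** -(omega + 1.0))

def expansion_energy(v1, v2, n=20000, **coef):
    """Energy per unit initial volume released between relative volumes
    v1 and v2, by trapezoidal integration under the p-v curve."""
    h = (v2 - v1) / n
    edges = 0.5 * (jwl_pressure(v1, **coef) + jwl_pressure(v2, **coef))
    inner = sum(jwl_pressure(v1 + i * h, **coef) for i in range(1, n))
    return (edges + inner) * h

# Illustrative coefficients (pressures in GPa); NOT the EDC37 parameters.
coef = dict(A=854.5, B=20.5, C=1.2, R1=4.6, R2=1.35, omega=0.25)
p_cj = jwl_pressure(0.729, **coef)            # pressure near v_CJ = 0.729
e_7v = expansion_energy(0.729, 7.0, **coef)   # energy out to 7 volumes
```

Integrating under the p-Vr curve out to 7 relative volumes is exactly the energy comparison described in the abstract; with a fitted parameter set, p(v_CJ) would be compared against the CJ pressure from the cylinder-test analysis.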

  19. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  20. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  1. Empirical model for the volume-change behavior of debris flows

    USGS Publications Warehouse

    Cannon, S.H.; ,

    1993-01-01

    The potential travel distance of debris flows depends on the volume-change behavior of the flows as they travel down hillsides; movement stops where the volume of actively flowing debris becomes negligible. The average change in volume over distance for 26 recent debris flows in the Honolulu area was assumed to be a function of the slope over which the debris flow traveled, the degree of flow confinement by the channel, and an assigned value for the type of vegetation through which the debris flow traveled. Analysis of the data yielded a relation that can be incorporated into digital elevation models to characterize debris-flow travel on Oahu.

  2. Modeling of turbulent transport as a volume process

    NASA Technical Reports Server (NTRS)

    Jennings, Mark J.; Morel, Thomas

    1987-01-01

    An alternative type of modeling was proposed for the turbulent transport terms in Reynolds-averaged equations. One particular implementation of the model was considered, based on the two-point velocity correlations. The model was found to reproduce the trends but not the magnitude of the nonisotropic behavior of the turbulent transport. Some interesting insights were developed concerning the shape of the contracted two-point correlation volume. This volume is strongly deformed by mean shear from the spherical shape found in unstrained flows. Of particular interest is the finding that the shape is sharply waisted, indicating preferential lines of communication, which should have a direct effect on turbulent transfer and on other processes.

  3. Nonlinear-regression flow model of the Gulf Coast aquifer systems in the south-central United States

    USGS Publications Warehouse

    Kuiper, L.K.

    1994-01-01

    A multiple-regression methodology was used to help answer questions concerning model reliability and to calibrate a time-dependent variable-density ground-water flow model of the gulf coast aquifer systems in the south-central United States. More than 40 regression models with 2 to 31 regression parameters are used, and detailed results are presented for 12 of the models. More than 3,000 values of grid-element volume-averaged head and hydraulic conductivity are used for the regression model observations. Calculated prediction interval half widths, though perhaps inaccurate due to a lack of normality of the residuals, are the smallest for models with only four regression parameters. In addition, the root-mean weighted residual decreases very little with an increase in the number of regression parameters. The various models showed considerable overlap between the prediction intervals for shallow head and hydraulic conductivity. Approximate 95-percent prediction interval half widths exceed 108 feet for volume-averaged freshwater head and 0.89 for volume-averaged base-10 logarithm of hydraulic conductivity. All of the models are unreliable for the prediction of head and ground-water flow in the deeper parts of the aquifer systems, including the amount of flow coming from the underlying geopressured zone. Truncating the domain of solution of one model to exclude the part of the system having a ground-water density greater than 1.005 grams per cubic centimeter, or the part below a depth of 3,000 feet, and setting the density to that of freshwater does not appreciably change the results for head and ground-water flow, except at locations close to the truncation surface.

  4. Short-time-scale left ventricular systolic dynamics. Evidence for a common mechanism in both left ventricular chamber and heart muscle mechanics.

    PubMed

    Campbell, K B; Shroff, S G; Kirkpatrick, R D

    1991-06-01

    Based on the premise that short-time-scale, small-amplitude pressure/volume/outflow behavior of the left ventricular chamber was dominated by dynamic processes originating in cardiac myofilaments, a prototype model was built to predict pressure responses to volume perturbations. In the model, chamber pressure was taken to be the product of the number of generators in a pressure-bearing state and their average volumetric distortion, as in the muscle theory of A.F. Huxley, in which force was equal to the number of attached crossbridges and their average lineal distortion. Further, as in the muscle theory, pressure generators were assumed to cycle between two states, the pressure-bearing state and the non-pressure-bearing state. Experiments were performed in the isolated ferret heart, where variable volume decrements (0.01-0.12 ml) were removed at two commanded flow rates (flow clamps, -7 and -14 ml/sec). Pressure responses to volume removals were analyzed. Although the prototype model accounted for most features of the pressure responses, subtle but systematic discrepancies were observed. The presence or absence of flow and the magnitude of flow affected estimates of model parameters. However, estimates of parameters did not differ when the model was fitted to flow clamps with similar magnitudes of flows but different volume changes. Thus, prototype model inadequacies were attributed to misrepresentations of flow-related effects but not of volume-related effects. Based on these discrepancies, an improved model was built that added to the simple two-state cycling scheme, a pathway to a third state. This path was followed only in response to volume change. The improved model eliminated the deficiencies of the prototype model and was adequate in accounting for all observations. 
Since the template for the improved model was taken from the cycling crossbridge theory of muscle contraction, it was concluded that, in spite of the complexities of geometry, architecture, and regional heterogeneity of function and structure, crossbridge mechanisms dominated the short-time-scale dynamics of left ventricular chamber behavior.

  5. Lung lobe modeling and segmentation with individualized surface meshes

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Barschdorf, Hans; von Berg, Jens; Dries, Sebastian; Franz, Astrid; Klinder, Tobias; Lorenz, Cristian; Renisch, Steffen; Wiemker, Rafael

    2008-03-01

    An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT-scanners, their contrast in the CT-image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.

  6. Elements of an improved model of debris‐flow motion

    USGS Publications Warehouse

    Iverson, Richard M.

    2009-01-01

    A new depth‐averaged model of debris‐flow motion describes simultaneous evolution of flow velocity and depth, solid and fluid volume fractions, and pore‐fluid pressure. Non‐hydrostatic pore‐fluid pressure is produced by dilatancy, a state‐dependent property that links the depth‐averaged shear rate and volumetric strain rate of the granular phase. Pore‐pressure changes caused by shearing allow the model to exhibit rate‐dependent flow resistance, despite the fact that the basal shear traction involves only rate‐independent Coulomb friction. An analytical solution of simplified model equations shows that the onset of downslope motion can be accelerated or retarded by pore‐pressure change, contingent on whether dilatancy is positive or negative. A different analytical solution shows that such effects will likely be muted if downslope motion continues long enough, because dilatancy then evolves toward zero, and volume fractions and pore pressure concurrently evolve toward steady states.

  7. Elements of an improved model of debris-flow motion

    USGS Publications Warehouse

    Iverson, R.M.

    2009-01-01

    A new depth-averaged model of debris-flow motion describes simultaneous evolution of flow velocity and depth, solid and fluid volume fractions, and pore-fluid pressure. Non-hydrostatic pore-fluid pressure is produced by dilatancy, a state-dependent property that links the depth-averaged shear rate and volumetric strain rate of the granular phase. Pore-pressure changes caused by shearing allow the model to exhibit rate-dependent flow resistance, despite the fact that the basal shear traction involves only rate-independent Coulomb friction. An analytical solution of simplified model equations shows that the onset of downslope motion can be accelerated or retarded by pore-pressure change, contingent on whether dilatancy is positive or negative. A different analytical solution shows that such effects will likely be muted if downslope motion continues long enough, because dilatancy then evolves toward zero, and volume fractions and pore pressure concurrently evolve toward steady states. © 2009 American Institute of Physics.

  8. Correlation between physicochemical properties of modified clinoptilolite and its performance in the removal of ammonia-nitrogen.

    PubMed

    Dong, Yingbo; Lin, Hai; He, Yinhai

    2017-03-01

    The physicochemical properties of 24 modified clinoptilolite samples and their ammonia-nitrogen removal rates were measured to investigate the correlation between them. The modified clinoptilolites, obtained by acid, alkali, salt, and thermal modification, were used to adsorb ammonia-nitrogen. The surface area, average pore width, macropore volume, mesopore volume, micropore volume, cation exchange capacity (CEC), zeta potential, silicon-aluminum ratio, and ammonia-nitrogen removal rate of the 24 modified clinoptilolite samples were measured. Subsequently, linear regression analysis was used to examine the correlation between the physicochemical properties of the different modified clinoptilolite samples and the ammonia-nitrogen removal rate. Results showed that the CEC was the major physicochemical property affecting ammonia-nitrogen removal performance. Ordered from strongest to weakest impact: CEC > silicon-aluminum ratio > mesopore volume > micropore volume > surface area. In contrast, the macropore volume, average pore width, and zeta potential had a negligible effect on the ammonia-nitrogen removal rate. A relational model between the physicochemical properties and the ammonia-nitrogen removal rate of modified clinoptilolite was established: ammonia-nitrogen removal rate = 1.415[CEC] + 173.533[macropore volume] + 0.683[surface area] + 4.789[Si/Al] - 201.248. The correlation coefficient of this model was 0.982, which passed the validation of the regression equation and regression coefficients. The results of the significance test showed a good fit to the correlation model.
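The reported relational model can be evaluated directly; the coefficients are those quoted in the abstract, while the input values below are hypothetical and units follow the paper:

```python
def ammonia_removal_rate(cec, macropore_volume, surface_area, si_al):
    """Removal-rate regression reported in the abstract:
    rate = 1.415*CEC + 173.533*macropore + 0.683*area + 4.789*Si/Al - 201.248
    (units as in the paper; inputs here are illustrative only)."""
    return (1.415 * cec + 173.533 * macropore_volume
            + 0.683 * surface_area + 4.789 * si_al - 201.248)

rate = ammonia_removal_rate(cec=120.0, macropore_volume=0.05,
                            surface_area=30.0, si_al=5.0)
```

Note the relative weight of the CEC term for typical clinoptilolite CEC values, consistent with the ranking of impacts given above.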

  9. PARADIGM USING JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY STOCHASTIC DESCRIPTION AS A TEMPLATE FOR MODEL EVALUATION

    EPA Science Inventory

    The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...

  10. Financial modelling of femtosecond laser-assisted cataract surgery within the National Health Service using a 'hub and spoke' model for the delivery of high-volume cataract surgery.

    PubMed

    Roberts, H W; Ni, M Z; O'Brart, D P S

    2017-03-16

    To develop financial models which offset the additional costs associated with femtosecond laser (FL)-assisted cataract surgery (FLACS) against improvements in productivity, and to determine important factors relating to its implementation into the National Health Service (NHS). FL platforms are expensive in both initial purchase and running costs. The additional costs associated with FL technology might be offset by an increase in surgical efficiency. Using a 'hub and spoke' model to provide high-volume cataract surgery, we designed a financial model comparing FLACS against conventional phacoemulsification surgery (CPS). The model was populated with averaged financial data from 4 NHS foundation trusts and 4 commercial organisations manufacturing FL platforms. We tested our model with sensitivity and threshold analyses to allow for variations or uncertainties. The averaged weekly workload for cataract surgery using our hub and spoke model required either 8 or 5.4 theatre sessions with CPS or FLACS, respectively. Despite reduced theatre utilisation, CPS (average £433/case) was still found to be 8.7% cheaper than FLACS (average £502/case). The greatest cost associated with FLACS was the patient interface (PI) (average £135/case). Sensitivity analyses demonstrated that FLACS could be less expensive than CPS, but only if efficiency, in terms of cataract procedures per theatre list, increased by over 100%, or if the cost of the PI was reduced by almost 70%. The financial viability of FLACS within the NHS is currently precluded by the cost of the PI and the lack of knowledge regarding any gains in operational efficiency. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  11. Change in brain and lesion volumes after CEE therapies: the WHIMS-MRI studies.

    PubMed

    Coker, Laura H; Espeland, Mark A; Hogan, Patricia E; Resnick, Susan M; Bryan, R Nick; Robinson, Jennifer G; Goveas, Joseph S; Davatzikos, Christos; Kuller, Lewis H; Williamson, Jeff D; Bushnell, Cheryl D; Shumaker, Sally A

    2014-02-04

    To determine whether smaller brain volumes in older women who had completed Women's Health Initiative (WHI)-assigned conjugated equine estrogen-based hormone therapy (HT), reported by WHI Memory Study (WHIMS)-MRI, correspond to a continuing increased rate of atrophy an average of 6.1 to 7.7 years later in WHIMS-MRI2. A total of 1,230 WHI participants were contacted: 797 (64.8%) consented, and 729 (59%) were rescanned an average of 4.7 years after the initial MRI scan. Mean annual rates of change in total brain volume, the primary outcome, and rates of change in ischemic lesion volumes, the secondary outcome, were compared between treatment groups using mixed-effect models with adjustment for trial, clinical site, age, intracranial volumes, and time between MRI measures. Total brain volume decreased an average of 3.22 cm3/y in the active arm and 3.07 cm3/y in the placebo arm (p = 0.53). Total ischemic lesion volumes increased in both arms at a rate of 0.12 cm3/y (p = 0.88). Conjugated equine estrogen-based postmenopausal HT, previously assigned at WHI baseline, did not affect rates of decline in brain volumes or increases in brain lesion volumes during the 4.7 years between the initial and follow-up WHIMS-MRI studies. Smaller frontal lobe volumes were observed as persistent group differences among women assigned to active HT compared with placebo. Women with a history of cardiovascular disease treated with active HT, compared with placebo, had higher rates of accumulation in white matter lesion volume and total brain lesion volume. Further study may elucidate mechanisms that explain these findings.

  12. Change in brain and lesion volumes after CEE therapies

    PubMed Central

    Espeland, Mark A.; Hogan, Patricia E.; Resnick, Susan M.; Bryan, R. Nick; Robinson, Jennifer G.; Goveas, Joseph S.; Davatzikos, Christos; Kuller, Lewis H.; Williamson, Jeff D.; Bushnell, Cheryl D.; Shumaker, Sally A.

    2014-01-01

    Objectives: To determine whether smaller brain volumes in older women who had completed Women's Health Initiative (WHI)-assigned conjugated equine estrogen–based hormone therapy (HT), reported by WHI Memory Study (WHIMS)-MRI, correspond to a continuing increased rate of atrophy an average of 6.1 to 7.7 years later in WHIMS-MRI2. Methods: A total of 1,230 WHI participants were contacted: 797 (64.8%) consented, and 729 (59%) were rescanned an average of 4.7 years after the initial MRI scan. Mean annual rates of change in total brain volume, the primary outcome, and rates of change in ischemic lesion volumes, the secondary outcome, were compared between treatment groups using mixed-effect models with adjustment for trial, clinical site, age, intracranial volumes, and time between MRI measures. Results: Total brain volume decreased an average of 3.22 cm3/y in the active arm and 3.07 cm3/y in the placebo arm (p = 0.53). Total ischemic lesion volumes increased in both arms at a rate of 0.12 cm3/y (p = 0.88). Conclusions: Conjugated equine estrogen–based postmenopausal HT, previously assigned at WHI baseline, did not affect rates of decline in brain volumes or increases in brain lesion volumes during the 4.7 years between the initial and follow-up WHIMS-MRI studies. Smaller frontal lobe volumes were observed as persistent group differences among women assigned to active HT compared with placebo. Women with a history of cardiovascular disease treated with active HT, compared with placebo, had higher rates of accumulation in white matter lesion volume and total brain lesion volume. Further study may elucidate mechanisms that explain these findings. PMID:24384646

  13. Stresses and elastic constants of crystalline sodium, from molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.

    1985-02-01

    The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures up to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration, and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages, and to test for symmetry. 45 refs., 10 figs., 4 tabs.

  14. Volumetric Analysis of Alveolar Bone Defect Using Three-Dimensional-Printed Models Versus Computer-Aided Engineering.

    PubMed

    Du, Fengzhou; Li, Binghang; Yin, Ningbei; Cao, Yilin; Wang, Yongqian

    2017-03-01

    Knowing the volume of a graft is essential in repairing alveolar bone defects. This study investigates 2 advanced preoperative volume measurement methods: three-dimensional (3D) printing and computer-aided engineering (CAE). Ten unilateral alveolar cleft patients were enrolled in this study. Their computed tomographic data were sent to 3D printing and CAE software. A simulated graft was placed on the 3D-printed model, and the graft volume was measured by water displacement. The CAE software calculated volume using a mirror-reverse technique. The authors compared the actual volumes of the simulated grafts with the CAE software-derived volumes. The average volume of the simulated bone grafts on 3D-printed models was 1.52 mL, higher than the mean volume of 1.47 mL calculated by CAE software. The difference between the 2 volumes ranged from -0.18 to 0.42 mL. The paired Student t test showed no statistically significant difference between the volumes derived from the 2 methods. This study demonstrated that the mirror-reverse technique by CAE software is as accurate as the simulated operation on 3D-printed models in unilateral alveolar cleft patients. These findings further validate the use of 3D printing and the CAE technique in alveolar defect repair.

  15. SoftWAXS: a computational tool for modeling wide-angle X-ray solution scattering from biomolecules.

    PubMed

    Bardhan, Jaydeep; Park, Sanghyun; Makowski, Lee

    2009-10-01

    This paper describes a computational approach to estimating wide-angle X-ray solution scattering (WAXS) from proteins, which has been implemented in a computer program called SoftWAXS. The accuracy and efficiency of SoftWAXS are analyzed for analytically solvable model problems as well as for proteins. Key features of the approach include a numerical procedure for performing the required spherical averaging and explicit representation of the solute-solvent boundary and the surface of the hydration layer. These features allow the Fourier transform of the excluded volume and hydration layer to be computed directly and with high accuracy. This approach will allow future investigation of different treatments of the electron density in the hydration shell. Numerical results illustrate the differences between this approach to modeling the excluded volume and a widely used model that treats the excluded-volume function as a sum of Gaussians representing the individual atomic excluded volumes. Comparison of the results obtained here with those from explicit-solvent molecular dynamics clarifies shortcomings inherent to the representation of solvent as a time-averaged electron-density profile. In addition, an assessment is made of how the calculated scattering patterns depend on input parameters such as the solute-atom radii, the width of the hydration shell and the hydration-layer contrast. These results suggest that obtaining predictive calculations of high-resolution WAXS patterns may require sophisticated treatments of solvent.

  16. 40 CFR Table 6 to Subpart Cccc of... - Emission Limitations for Energy Recovery Units That Commenced Construction After June 4, 2010, or...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Biomass—6.2 parts per million dry volume; Coal—650 parts per million dry volume; 3-run average (1 hour... Biomass—290 parts per million dry volume; Coal—340 parts per million dry volume; 3-run average (1 hour... volume; Biomass—240 parts per million dry volume; Coal—95 parts per million dry volume; 3-run average (1 hour...

  17. Study of CdTe quantum dots grown using a two-step annealing method

    NASA Astrophysics Data System (ADS)

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

    High size dispersion, large average quantum dot radius and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra have shown that quantum dots grown using two-step annealing have a lower average radius, lower size dispersion, higher volume ratio and a greater decrease in bulk free energy as compared to quantum dots grown conventionally.

  18. Estimating Highway Volumes Using Vehicle Probe Data - Proof of Concept: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Yi; Young, Stanley E; Sadabadi, Kaveh

    This paper examines the feasibility of using sampled commercial probe data in combination with validated continuous counter data to accurately estimate vehicle volume across the entire roadway network, for any hour during the year. Currently, both real-time and archived volume data for roadways at specific times are extremely sparse. Most volume data are average annual daily traffic (AADT) measures derived from the Highway Performance Monitoring System (HPMS). Although methods to factor the AADT to hourly averages for a typical day of week exist, actual volume data are limited to a sparse collection of locations at which volumes are continuously recorded. This paper explores the use of commercial probe data to generate accurate volume measures that span the highway network, providing ubiquitous coverage in space and specific point-in-time measures for a specific date and time. The paper examines the need for the data, fundamental accuracy limitations based on a basic statistical model that takes into account the sampling nature of probe data, and early results from a proof of concept exercise revealing the potential of probe type data calibrated with public continuous count data to meet end user expectations in terms of accuracy of volume estimates.
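    The core scaling idea, expanding observed probe counts by a penetration rate calibrated against continuous counters, can be sketched as follows. The penetration rate and the Poisson error model are illustrative assumptions, not the paper's calibrated method:

```python
import math

def estimate_volume(probe_count, penetration_rate):
    """Scale observed probe vehicles up to total volume, assuming a known
    penetration rate (probe vehicles as a fraction of all traffic)."""
    return probe_count / penetration_rate

def relative_sampling_error(probe_count):
    """Approximate relative standard error of the estimate under a Poisson
    sampling model for probe counts (a simplifying assumption)."""
    return 1.0 / math.sqrt(probe_count)

# e.g. 50 probes observed in an hour at an assumed 2% penetration rate
volume = estimate_volume(50, 0.02)      # estimated total vehicles
err = relative_sampling_error(50)       # ~14% relative error
```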

  19. Negating Tissue Contracture Improves Volume Maintenance and Longevity of In Vivo Engineered Tissues.

    PubMed

    Lytle, Ian F; Kozlow, Jeffrey H; Zhang, Wen X; Buffington, Deborah A; Humes, H David; Brown, David L

    2015-10-01

    Engineering large, complex tissues in vivo requires robust vascularization to optimize survival, growth, and function. Previously, the authors used a "chamber" model that promotes intense angiogenesis in vivo as a platform for functional three-dimensional muscle and renal engineering. A silicone membrane used to define the structure and to contain the constructs is successful in the short term. However, over time, generated tissues contract and decrease in size in a manner similar to the capsular contracture seen around many commonly used surgical implants. The authors hypothesized that modification of the chamber structure or internal surface would promote tissue adherence and maintain construct volume. Three chamber configurations were tested for volume maintenance. Previously studied, smooth silicone surfaces were compared to chambers modified for improved tissue adherence, either with multiple transmembrane perforations or lined with a commercially available textured surface. Tissues were allowed to mature long term in a rat model before analysis. On explantation, average tissue masses were 49, 102, and 122 mg; average volumes were 74, 158, and 176 μl; and average cross-sectional areas were 1.6, 6.7, and 8.7 mm2 for the smooth, perforated, and textured groups, respectively. Both perforated and textured designs demonstrated significantly greater measures than the smooth-surfaced constructs in all respects. By modifying the design of chambers supporting vascularized, three-dimensional, in vivo tissue engineering constructs, generated tissue mass, volume, and area can be maintained over a long time course. Successful progress in the scale-up of construct size should follow, leading to improved potential for development of increasingly complex engineered tissues.

  20. Effects of reservoir heterogeneity on scaling of effective mass transfer coefficient for solute transport

    NASA Astrophysics Data System (ADS)

    Leung, Juliana Y.; Srinivasan, Sanjay

    2016-09-01

    Modeling transport processes at large scale requires proper scale-up of subsurface heterogeneity and an understanding of its interaction with the underlying transport mechanisms. A technique based on volume averaging is applied to quantitatively assess the scaling characteristics of the effective mass transfer coefficient in heterogeneous reservoir models. The effective mass transfer coefficient represents the combined contribution of diffusion and dispersion to the transport of non-reactive solute particles within a fluid phase. Although treatment of transport problems with the volume averaging technique has been published in the past, application to geological systems exhibiting realistic spatial variability remains a challenge. Previously, the authors developed a procedure in which results from a fine-scale numerical flow simulation, reflecting the full physics of the transport process albeit over only a sub-volume of the reservoir, are integrated with the volume averaging technique to provide an effective description of transport properties. The procedure is extended such that spatial averaging is performed at the local-heterogeneity scale. In this paper, the transport of a passive (non-reactive) solute is simulated on multiple reservoir models exhibiting different patterns of heterogeneity, and the scaling behavior of the effective mass transfer coefficient (Keff) is examined and compared. One set of models exhibits power-law (fractal) characteristics, and the variability of dispersion and Keff with scale is in good agreement with analytical expressions described in the literature. This work offers insight into the impacts of heterogeneity on the scaling of effective transport parameters. A key finding is that spatial heterogeneity models with similar univariate and bivariate statistics may exhibit different scaling characteristics because of the influence of higher order statistics. More mixing is observed in the channelized models with higher-order continuity.
It reinforces the notion that the flow response is influenced by the higher-order statistical description of heterogeneity. An important implication is that when scaling-up transport response from lab-scale results to the field scale, it is necessary to account for the scale-up of heterogeneity. Since the characteristics of higher-order multivariate distributions and large-scale heterogeneity are typically not captured in small-scale experiments, a reservoir modeling framework that captures the uncertainty in heterogeneity description should be adopted.

  1. The capability of radial basis function to forecast the volume fractions of the annular three-phase flow of gas-oil-water.

    PubMed

    Roshani, G H; Karami, A; Salehizadeh, A; Nazemi, E

    2017-11-01

    The problem of how to precisely measure the volume fractions of oil-gas-water mixtures in a pipeline remains one of the main challenges in the petroleum industry. This paper reports the capability of a Radial Basis Function (RBF) network in forecasting the volume fractions in a gas-oil-water multiphase system. In the present research, the volume fractions in the annular three-phase flow are measured based on a dual energy metering system including 152Eu and 137Cs sources and one NaI detector, and then modeled by an RBF model. Since the volume fractions sum to a constant 100%, it is enough for the RBF model to forecast only two of them. In this investigation, three RBF models are employed: the first forecasts the oil and water volume fractions, the second the water and gas volume fractions, and the third the gas and oil volume fractions. In the next stage, the numerical data obtained from the MCNP-X code are introduced to the RBF models. The average errors of these three models are then calculated and compared, and the model with the least error is selected as the best predictive model. Based on the results, the best RBF model forecasts the oil and water volume fractions with a mean relative error of less than 0.5%, which indicates that the RBF model introduced in this study provides an effective mechanism for forecasting the volume fractions. Copyright © 2017 Elsevier Ltd. All rights reserved.
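    The RBF approach, with the third fraction recovered from the 100% constraint, can be sketched with a minimal Gaussian RBF interpolator. The detector features and training targets below are hypothetical, not the paper's MCNP-X data, and this plain interpolation stands in for the authors' trained network:

```python
import numpy as np

def rbf_fit(X, y, gamma=1.0):
    """Solve for Gaussian RBF interpolation weights on training points X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)          # kernel matrix, positive definite
    return np.linalg.solve(Phi, y)

def rbf_predict(X_train, w, X_new, gamma=1.0):
    """Evaluate the fitted RBF model at new points."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w

# Hypothetical detector features -> (oil, water) volume fractions in %.
X = np.array([[0.1, 0.9], [0.4, 0.6], [0.7, 0.3], [0.9, 0.1]])
y = np.array([[10.0, 60.0], [30.0, 40.0], [55.0, 20.0], [70.0, 10.0]])
w = rbf_fit(X, y)
pred = rbf_predict(X, w, X)         # reproduces the training targets
gas = 100.0 - pred.sum(axis=1)      # third fraction from the 100% constraint
```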

  2. Dynamic heart model for the mathematical cardiac torso (MCAT) phantom to represent the invariant total heart volume

    NASA Astrophysics Data System (ADS)

    Pretorius, P. H.; King, Michael A.; Tsui, Benjamin M.; LaCroix, Karen; Xia, Weishi

    1998-07-01

    This manuscript documents the alteration of the heart model of the MCAT phantom to better represent cardiac motion. The objective of the inclusion of motion was to develop a digital simulation of the heart such that the impact of cardiac motion on single photon emission computed tomography (SPECT) imaging could be assessed and methods of quantitating cardiac function could be investigated. The motion of the dynamic MCAT's heart is modeled by a 128 time frame volume curve. Eight time frames are averaged together to obtain a gated perfusion acquisition of 16 time frames and to ensure motion within every time frame. The position of the MCAT heart was changed during contraction to rotate back and forth around the long axis through the center of the left ventricle (LV), using the end systolic time frame as turning point. Simple respiratory motion was also introduced by changing the orientation of the heart model in a two-dimensional (2D) plane with every time frame. The averaging effect of respiratory motion in a specific time frame was modeled by randomly selecting multiple heart locations between two extreme orientations. Non-gated perfusion phantoms were also generated by averaging over all time frames. Maximal chamber volumes were selected to fit the profile of a normal healthy person. These volumes were changed during contraction of the ventricles such that the increase in volume in the atria compensated for the decrease in volume in the ventricles. The myocardium was modeled to represent shortening of muscle fibers during contraction, with the base of the ventricles moving towards a static apex. The apical region was modeled with moderate wall thinning present while myocardial mass was conserved. To test the applicability of the dynamic heart model, myocardial wall thickening was measured using maximum counts and full width at half maximum measurements, and compared with published trends.
An analytical 3D projector, with attenuation and detector response included, was used to generate radionuclide projection data sets. After reconstruction, a linear relationship was obtained between maximum myocardial counts and myocardium thickness, similar to published results. A numeric difference in values from different locations exists due to the different amounts of attenuation present. Similar results were obtained for FWHM measurements. Also, a hot apical region on the polar maps without attenuation compensation turns into an apical defect with attenuation compensation. The apical decrease was more prominent at ED than at ES due to the change in the partial volume effect. Both of these findings agree with clinical trends. It is concluded that the dynamic MCAT (dMCAT) phantom can be used to study the influence of various physical parameters on radionuclide perfusion imaging.

  3. A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus

    NASA Astrophysics Data System (ADS)

    Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.

    2017-11-01

    The Selar crumenophthalmus, known in English as the big-eyed scad and locally as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with forecasting catch volumes of big-eyed scad for commercial consumption. The data used are quarterly catch volumes of big-eyed scad from 2002 to the first quarter of 2017. These actual data are available from the OpenSTAT database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze, and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model, and a hybrid model consisting of ARIMA and ANN were developed to forecast catch volumes of big-eyed scad. Statistical errors such as Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable model to forecast the catch volumes of the big-eyed scad for the next few quarters.
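    The error measures used for model selection are standard and easy to sketch. A minimal illustration with hypothetical quarterly catch volumes, not the PSA data:

```python
import math

def mae(actual, forecast):
    """Mean absolute error between observations and forecasts."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean square error between observations and forecasts."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

# Hypothetical quarterly catch volumes vs. one model's forecasts.
actual = [120.0, 95.0, 130.0, 110.0]
forecast = [115.0, 100.0, 124.0, 112.0]
errors = (mae(actual, forecast), rmse(actual, forecast))
```

The model with the smallest MAE and RMSE on held-out quarters would be preferred, as in the abstract's comparison.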

  4. Heat transfer measurements for Stirling machine cylinders

    NASA Technical Reports Server (NTRS)

    Kornhauser, Alan A.; Kafka, B. C.; Finkbeiner, D. L.; Cantelmi, F. C.

    1994-01-01

    The primary purpose of this study was to measure the effects of inflow-produced turbulence on heat transfer in Stirling machine cylinders. A secondary purpose was to provide new experimental information on heat transfer in gas springs without inflow. The apparatus for the experiment consisted of a varying-volume piston-cylinder space connected to a fixed-volume space by an orifice. The orifice size could be varied to adjust the level of inflow-produced turbulence, or the orifice plate could be removed completely so as to merge the two spaces into a single gas spring space. Speed, cycle mean pressure, overall volume ratio, and varying-volume space clearance ratio could also be adjusted. Volume, pressure in both spaces, and local heat flux at two locations were measured. The pressure and volume measurements were used to calculate area-averaged heat flux, heat transfer hysteresis loss, and other heat transfer-related effects. Experiments in the one-space arrangement extended the range of previous gas spring tests to lower volume ratio and higher nondimensional speed. The tests corroborated previous results and showed that analytic models for heat transfer and loss based on volume ratio approaching 1 were valid for volume ratios ranging from 1 to 2, a range covering most gas springs in Stirling machines. Data from experiments in the two-space arrangement were first analyzed by lumping the two spaces together and examining total loss and averaged heat transfer as functions of overall nondimensional parameters. Heat transfer and loss were found to be significantly increased by inflow-produced turbulence. These increases could be modeled by appropriate adjustment of empirical coefficients in an existing semi-analytic model. An attempt was made to use an inverse, parameter optimization procedure to find the heat transfer in each of the two spaces.
This procedure was successful in retrieving this information from simulated pressure-volume data with artificially generated noise, but it failed with the actual experimental data. This is evidence that the models used in the parameter optimization procedure (and to generate the simulated data) were not correct. Data from the surface heat flux sensors indicated that the primary shortcoming of these models was that they assumed turbulence levels to be constant over the cycle. Sensor data in the varying volume space showed a large increase in heat flux, probably due to turbulence, during the expansion stroke.

  5. Experience-based quality control of clinical intensity-modulated radiotherapy planning.

    PubMed

    Moore, Kevin L; Brame, R Scott; Low, Daniel A; Mutic, Sasa

    2011-10-01

    To incorporate a quality control tool, based on previous planning experience and patient-specific anatomic information, into the intensity-modulated radiotherapy (IMRT) plan generation process, and to determine whether the tool improved treatment plan quality. A retrospective study of 42 IMRT plans demonstrated a correlation between the fraction of an organ at risk (OAR) overlapping the planning target volume and the mean dose. This yielded a model, predicted dose = prescription dose × (0.2 + 0.8[1 − exp(−3 × overlapping planning target volume/volume of OAR)]), that predicted the achievable mean doses according to the planning target volume overlap/volume of OAR and the prescription dose. The model was incorporated into the planning process by way of a user-executable script that reported the predicted dose for any OAR. The script was introduced to clinicians engaged in IMRT planning and deployed thereafter. The script's effect was evaluated by tracking δ = (mean dose − predicted dose)/predicted dose, the fraction by which the mean dose exceeded the model. All OARs under investigation (rectum and bladder in prostate cancer; parotid glands, esophagus, and larynx in head-and-neck cancer) exhibited both smaller δ and reduced variability after script implementation. These effects were substantial for the parotid glands, for which the previous δ = 0.28 ± 0.24 was reduced to δ = 0.13 ± 0.10. The clinical relevance was most evident in the subset of cases in which the parotid glands were potentially salvageable (predicted dose < 30 Gy). Before script implementation, an average of 30.1 Gy was delivered to the salvageable cases, with an average predicted dose of 20.3 Gy. After implementation, an average of 18.7 Gy was delivered to salvageable cases, with an average predicted dose of 17.2 Gy. In the prostate cases, the rectum model excess was reduced from δ = 0.28 ± 0.20 to δ = 0.07 ± 0.15.
On surveying dosimetrists at the end of the study, most reported that the script both improved their IMRT planning (8 of 10) and increased their efficiency (6 of 10). This tool proved successful in increasing normal tissue sparing and reducing interclinician variability, providing effective quality control of the IMRT plan development process. Copyright © 2011 Elsevier Inc. All rights reserved.
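    The overlap-based prediction model and the tracked excess δ can be written directly from the abstract's formula. A minimal sketch; the example OAR geometry is hypothetical:

```python
import math

def predicted_mean_dose(prescription_dose, overlap_volume, oar_volume):
    """Achievable OAR mean dose from the overlap model in the abstract:
    predicted = prescription * (0.2 + 0.8 * (1 - exp(-3 * overlap/OAR volume)))."""
    frac = overlap_volume / oar_volume
    return prescription_dose * (0.2 + 0.8 * (1.0 - math.exp(-3.0 * frac)))

def model_excess(mean_dose, predicted):
    """delta = (mean dose - predicted dose) / predicted dose, the QC metric tracked."""
    return (mean_dose - predicted) / predicted

# Hypothetical parotid gland: 70 Gy prescription, 2 cm3 of a 40 cm3 OAR in the PTV.
p = predicted_mean_dose(70.0, overlap_volume=2.0, oar_volume=40.0)
```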

  6. Validation of a White-light 3D Body Volume Scanner to Assess Body Composition.

    PubMed

    Medina-Inojosa, Jose; Somers, Virend; Jenkins, Sarah; Zundel, Jennifer; Johnson, Lynne; Grimes, Chassidy; Lopez-Jimenez, Francisco

    2017-01-01

Estimating body fat content has been shown to be a better predictor of adiposity-related cardiovascular risk than the commonly used body mass index (BMI). The white-light 3D body volume index (BVI) scanner is a non-invasive device normally used in the clothing industry to assess body shapes and sizes. We assessed the hypothesis that volume obtained by BVI is comparable to the volume obtained by air displacement plethysmography (Bod-pod) and is thus capable of assessing body fat mass using the bi-compartmental principles of body composition. We compared BVI to Bod-pod, a validated bi-compartmental method for assessing body fat percent that uses pressure/volume relationships in isothermal conditions to estimate body volume. Volume is then used to calculate body density (BD) by applying the formula density = Body Mass/Volume. Body fat mass percentage is then calculated using the Siri formula (4.95/BD - 4.50) × 100. Subjects were undergoing a wellness evaluation. Measurements from both devices were obtained the same day. A prediction model for total Bod-pod volume was developed using linear regression based on 80% of the observations (N=971), as follows: Predicted Bod-pod Volume (L) = 9.498 + 0.805*(BVI volume, L) - 0.0411*(Age, years) - 3.295*(Male=0, Female=1) + 0.0554*(BVI volume, L)*(Male=0, Female=1) + 0.0282*(Age, years)*(Male=0, Female=1). Predictions for Bod-pod volume based on the estimated model were then calculated for the remaining 20% (N=243) and compared to the volume measured by the Bod-pod. Mean age among the 971 individuals was 41.5 ± 12.9 years, 39.4% were men, weight was 81.6 ± 20.9 kg, and BMI was 27.8 ± 6.3 kg/m². The average difference between volume measured by Bod-pod and volume predicted by BVI was 0.0 L (median: -0.4 L, IQR: -1.8 L to 1.5 L, R² = 0.9845). The average difference between measured and predicted body fat was -1% (median: -2.7%, IQR: -13.2 to 9.9, R² = 0.9236). 
Volume and body fat mass can be estimated by using volume measurements obtained by a white-light 3D body scanner and the prediction model developed in this study.
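The prediction model and the Siri formula as reported can be transcribed directly; the function names are ours, and units follow the abstract (volume in L, mass in kg, so density is in kg/L):

```python
def predict_bodpod_volume(bvi_volume_l, age_yr, female):
    """Predicted Bod-pod volume (L) from the study's linear model.
    female: 0 for male, 1 for female."""
    return (9.498
            + 0.805 * bvi_volume_l
            - 0.0411 * age_yr
            - 3.295 * female
            + 0.0554 * bvi_volume_l * female
            + 0.0282 * age_yr * female)

def siri_body_fat_percent(mass_kg, volume_l):
    """Body fat percent via the Siri formula (4.95/BD - 4.50) * 100,
    with body density BD = mass / volume in kg/L."""
    density = mass_kg / volume_l
    return (4.95 / density - 4.50) * 100.0
```

Chaining the two functions reproduces the bi-compartmental pipeline described in the abstract: scanner volume in, predicted Bod-pod volume, then body fat percent out.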

  7. Analytical Computation of Effective Grid Parameters for the Finite-Difference Seismic Waveform Modeling With the PREM, IASP91, SP6, and AK135

    NASA Astrophysics Data System (ADS)

    Toyokuni, G.; Takenaka, H.

    2007-12-01

We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by volume harmonic averaging of the elastic moduli and volume arithmetic averaging of the density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most of the methods used for synthetic seismogram calculation today rely on standard Earth models, such as the PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which can be used in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
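For a grid cell that a discontinuity splits into a volume fraction f of one material and 1 - f of another, the averaging rules named in the abstract (harmonic for moduli, arithmetic for density) reduce to two one-liners. This is a sketch of the rule itself, not of the paper's analytical PFD machinery; function names are ours:

```python
def effective_modulus(f, m1, m2):
    """Volume harmonic average of two elastic moduli, with f the
    volume fraction of material 1 inside the grid cell."""
    return 1.0 / (f / m1 + (1.0 - f) / m2)

def effective_density(f, rho1, rho2):
    """Volume arithmetic average of two densities over the same cell."""
    return f * rho1 + (1.0 - f) * rho2
```

Harmonic averaging of the moduli (rather than arithmetic) is what keeps the traction continuous across the embedded discontinuity.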

  8. Estimating tree bole volume using artificial neural network models for four species in Turkey.

    PubMed

    Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V

    2010-01-01

Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages, including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used, producing the back-propagation (BPANN) and cascade-correlation (CCANN) artificial neural network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques, including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented, and the advantages and limitations of each are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade-correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species, since they gave unbiased results and were superior to almost all other methods in terms of error (%), expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
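Smalian's formula, which the study uses to build the validation volumes from short measured sections, averages the two end cross-sectional areas of each section; a minimal transcription (function names are ours, units assumed consistent, e.g. metres):

```python
import math

def smalian_section_volume(d_large, d_small, length):
    """Smalian's formula: section volume = mean of the two end
    cross-sectional areas times section length."""
    a1 = math.pi * (d_large / 2.0) ** 2
    a2 = math.pi * (d_small / 2.0) ** 2
    return 0.5 * (a1 + a2) * length

def bole_volume(section_diameters, section_length=1.0):
    """Aggregate section volumes along the bole, as done for the
    study's ~1 m sections; section_diameters lists the end diameters."""
    return sum(smalian_section_volume(d1, d2, section_length)
               for d1, d2 in zip(section_diameters, section_diameters[1:]))
```

For a perfect cylinder the formula is exact; for tapering stems it slightly overestimates, which is why short sections are used.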

  9. A representation of an NTCP function for local complication mechanisms

    NASA Astrophysics Data System (ADS)

    Alber, M.; Nüsslin, F.

    2001-02-01

    A mathematical formalism was tailored for the description of mechanisms complicating radiation therapy with a predominantly local component. The functional representation of an NTCP function was developed based on the notion that it has to be robust against population averages in order to be applicable to experimental data. The model was required to be invariant under scaling operations of the dose and the irradiated volume. The NTCP function was derived from the model assumptions that the complication is a consequence of local tissue damage and that the probability of local damage in a small reference volume is independent of the neighbouring volumes. The performance of the model was demonstrated with an animal model which has been published previously (Powers et al 1998 Radiother. Oncol. 46 297-306).
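The model's central assumption, that the complication occurs unless every small reference volume independently escapes local damage, has a direct numerical form. The local damage probabilities would come from a local dose-response model, which the abstract does not specify; this sketch takes them as given:

```python
def ntcp(local_damage_probs):
    """NTCP under the local-independence assumption: the complication
    is avoided only if no reference subvolume is damaged, so
    NTCP = 1 - product(1 - p_i) over all subvolumes."""
    survive = 1.0
    for p in local_damage_probs:
        survive *= (1.0 - p)
    return 1.0 - survive
```

Note the scaling behaviour this encodes: enlarging the irradiated volume adds factors to the product, so NTCP grows with volume even at fixed local damage probability.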

  10. Evaluation and modification of five techniques for estimating stormwater runoff for watersheds in west-central Florida

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.

    1996-01-01

Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the rational method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. 
The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. 
The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for ea
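The first technique evaluated, the Rational Method, is simple enough to state in one line. In US customary units (Q in ft³/s, i in in/hr, A in acres) the unit-conversion factor of 1.008 is conventionally taken as 1; the calibrated C values cited in the study (e.g. 0.39 for urban watersheds) plug in directly:

```python
def rational_peak_discharge(c, intensity_in_per_hr, area_acres):
    """Rational Method peak discharge: Q = C * i * A.
    US customary units: Q in ft^3/s, i in in/hr, A in acres
    (the 1.008 conversion factor is conventionally dropped)."""
    return c * intensity_in_per_hr * area_acres
```

The study's calibration amounts to choosing C per watershed so that Q matches the observed peak discharge.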

  11. Comparison of up-scaling methods in poroelasticity and its generalizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, J G

    2003-12-13

Four methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media will be discussed, compared, and contrasted. The four methods are: (1) effective medium theory, (2) mixture theory, (3) two-scale and multiscale homogenization, and (4) volume averaging. All these methods have advantages for some applications and disadvantages for others. For example, effective medium theory, mixture theory, and homogenization methods can all give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis. So, we learn from these comparisons that a researcher in the theory of poroelasticity and its generalizations needs to be conversant with two or more of these methods to solve problems generally.

  12. Effect of particle- and specimen-level transport on product state in compacted-powder combustion synthesis and thermal debinding of polymers from molded powders

    NASA Astrophysics Data System (ADS)

    Oliveira, Amir Antonio Martins

The existence of large gradients within particles and fast temporal variations in the temperature and species concentration prevent the use of asymptotic approximations for the closure of the volume-averaged, specimen-level formulations. In this case a solution of the particle-level transport problem is needed to complement the specimen-level volume-averaged equations. Here, the use of combined specimen-level and particle-level models for transport in reactive porous media is demonstrated with two examples. For the gasless compacted-powder combustion synthesis, a three-scale model is developed. The specimen-level model is based on the volume-averaged equations for species and temperature. Local thermal equilibrium is assumed and the macroscopic mass diffusion and convection fluxes are neglected. The particle-level model accounts for the interparticle diffusion (i.e., the liquid migration from liquid-rich to liquid-lean regions) and the intraparticle diffusion (i.e., the species mass diffusion within the product layer formed at the surface of the high melting temperature component). It is found that the interparticle diffusion controls the extent of conversion to the final product, the maximum temperature, and to a smaller degree the propagation velocity. The intraparticle diffusion controls the propagation velocity and to a smaller degree the maximum temperature. The initial stages of thermal degradation of EVA from molded specimens are modeled using volume-averaged equations for the species and empirical models for the kinetics of the thermal degradation, the vapor-liquid equilibrium, and the diffusion coefficient of acetic acid in the molten polymer. It is assumed that a bubble forms when the partial pressure of acetic acid exceeds the external ambient pressure. It is found that the removal of acetic acid is characterized by two regimes, a pre-charge dominated regime and a generation dominated regime. 
For the development of an optimum debinding schedule, the heating rate is modulated to avoid bubbling, while the concentration and temperature follow the bubble-point line for the mixture. The results show a strong dependence on the presence of a pre-charge. It is shown that isolation of the pre-charge effect by using temporary lower heating rates results in an optimum schedule for which the process time is reduced by over 70% when compared to a constant heating rate schedule.

  13. A High-Order Finite Spectral Volume Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A time accurate, high-order, conservative, yet efficient method named Finite Spectral Volume (FSV) is developed for conservation laws on unstructured grids. The concept of a 'spectral volume' is introduced to achieve high-order accuracy in an efficient manner similar to spectral element and multi-domain spectral methods. In addition, each spectral volume is further sub-divided into control volumes (CVs), and cell-averaged data from these control volumes is used to reconstruct a high-order approximation in the spectral volume. Riemann solvers are used to compute the fluxes at spectral volume boundaries. Then cell-averaged state variables in the control volumes are updated independently. Furthermore, TVD (Total Variation Diminishing) and TVB (Total Variation Bounded) limiters are introduced in the FSV method to remove/reduce spurious oscillations near discontinuities. A very desirable feature of the FSV method is that the reconstruction is carried out only once, and analytically, and is the same for all cells of the same type, and that the reconstruction stencil is always non-singular, in contrast to the memory and CPU-intensive reconstruction in a high-order finite volume (FV) method. Discussions are made concerning why the FSV method is significantly more efficient than high-order finite volume and the Discontinuous Galerkin (DG) methods. Fundamental properties of the FSV method are studied and high-order accuracy is demonstrated for several model problems with and without discontinuities.
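The reconstruction step can be illustrated in one dimension: a quadratic recovered from the cell averages of three unit-width CVs inside a spectral volume. The coefficient matrix depends only on the partition geometry, which is why (as the abstract notes) the reconstruction can be computed once per cell type; the generic 3×3 elimination below is our stand-in for that precomputed analytic inverse:

```python
def solve3(A, b):
    """Naive Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def reconstruct_quadratic(cell_averages):
    """Recover (a, b, c) of p(x) = a + b*x + c*x**2 from its averages over
    the unit CVs [0,1], [1,2], [2,3]: the average over [i, i+1] is
    a + b*(i + 1/2) + c*(i**2 + i + 1/3)."""
    A = [[1.0, i + 0.5, i * i + i + 1.0 / 3.0] for i in range(3)]
    return solve3(A, cell_averages)
```

Because the matrix is fixed by the partition, every spectral volume of the same type reuses the same reconstruction weights, which is the efficiency argument made against per-cell FV reconstruction.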

  14. Optimization of critical quality attributes in continuous twin-screw wet granulation via design space validated with pilot scale experimental data.

    PubMed

    Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu

    2017-06-15

In this study, the influence of key process variables (screw speed, throughput and liquid-to-solid (L/S) ratio) of continuous twin-screw wet granulation (TSWG) was investigated using a central composite face-centered (CCF) experimental design method. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability) and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties and tablet properties are influenced by changing the screw speed, throughput and L/S ratio. The TSWG process was optimized, based on the developed regression models, to produce granules with a volume average diameter of 150 μm and a yield of 95%. A design space (DS) was built around a volume average granule diameter between 90 and 200 μm and a granule yield greater than 75%, with failure probability analyzed using Monte Carlo simulations. Validation experiments successfully confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
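The failure-probability idea behind the design space can be sketched with Monte Carlo draws around a predicted operating point. The predicted means would come from the paper's fitted regression models; the noise standard deviations below are purely illustrative assumptions:

```python
import random

def failure_probability(pred_diam_um, pred_yield_pct,
                        sd_diam=15.0, sd_yield=5.0, n=20000, seed=0):
    """Monte Carlo estimate of the probability that a batch falls outside
    the design space (diameter in [90, 200] um AND yield > 75%).
    sd_diam / sd_yield are illustrative batch-to-batch noise levels."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        d = rng.gauss(pred_diam_um, sd_diam)
        y = rng.gauss(pred_yield_pct, sd_yield)
        if not (90.0 <= d <= 200.0 and y > 75.0):
            fails += 1
    return fails / n
```

Mapping this probability over the (screw speed, throughput, L/S) grid is what turns a response surface into a risk-based design space.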

  15. The cost determinants of routine infant immunization services: a meta-regression analysis of six country studies.

    PubMed

    Menzies, Nicolas A; Suharlim, Christian; Geng, Fangli; Ward, Zachary J; Brenzel, Logan; Resch, Stephen C

    2017-10-06

Evidence on immunization costs is a critical input for cost-effectiveness analysis and budgeting, and can describe variation in site-level efficiency. The Expanded Program on Immunization Costing and Financing (EPIC) Project represents the largest investigation of immunization delivery costs, collecting empirical data on routine infant immunization in Benin, Ghana, Honduras, Moldova, Uganda, and Zambia. We developed a pooled dataset from individual EPIC country studies (316 sites). We regressed log total costs against explanatory variables describing service volume, quality, access, other site characteristics, and income level. We used Bayesian hierarchical regression models to combine data from different countries and account for the multi-stage sample design. We calculated output elasticity as the percentage increase in outputs (service volume) for a 1% increase in inputs (total costs), averaged across the sample in each country, and reported first differences to describe the impact of other predictors. We estimated average and total cost curves for each country as a function of service volume. Across countries, average costs per dose ranged from $2.75 to $13.63. Average costs per child receiving diphtheria, tetanus, and pertussis ranged from $27 to $139. Within countries, costs per dose varied widely; on average, sites in the highest quintile were 440% more expensive than those in the lowest quintile. In each country, higher service volume was strongly associated with lower average costs. A doubling of service volume was associated with a 19% (95% interval, 4.0-32) reduction in costs per dose delivered (range 13% to 32% across countries), and the largest 20% of sites in each country realized costs per dose that were on average 61% lower than those for the smallest 20% of sites, controlling for other factors. 
Other factors associated with higher costs included hospital status, provision of outreach services, share of effort to management, level of staff training/seniority, distance to vaccine collection, additional days open per week, greater vaccination schedule completion, and per capita gross domestic product. We identified multiple features of sites and their operating environment that were associated with differences in average unit costs, with service volume being the most influential. These findings can inform efforts to improve the efficiency of service delivery and better understand resource needs.
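The reported economies of scale follow from the log-log relationship the regression implies: if total cost scales as C = k * V**beta, then cost per dose scales as V**(beta - 1). The helpers below (names ours) invert the headline figure, a 19% per-dose reduction on doubling, to recover the implied scale coefficient:

```python
import math

def per_dose_cost_change(beta, volume_ratio=2.0):
    """Fractional change in cost per dose when service volume is multiplied
    by volume_ratio, assuming total cost C = k * V**beta."""
    return volume_ratio ** (beta - 1.0) - 1.0

def scale_coefficient_from_doubling(reduction):
    """Invert the relation: which beta reproduces a given fractional
    per-dose cost reduction when volume doubles?"""
    return 1.0 + math.log(1.0 - reduction, 2.0)
```

A beta below 1 means total costs grow more slowly than volume, which is exactly the pattern the paper reports across all six countries.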

  16. Foundational Performance Analyses of Pressure Gain Combustion Thermodynamic Benefits for Gas Turbines

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Kaemming, Thomas A.

    2012-01-01

    A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.
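How much the averaged state depends on the chosen weighting can be seen with just two samples of a nonuniform exit flow. The numbers below are illustrative, not from the paper; the point is only that mass-flux and momentum-flux weightings of the same nonuniform profile give materially different "equivalent" steady states:

```python
def weighted_average(q, w):
    """Generic flux-weighted average: sum(w_i * q_i) / sum(w_i)."""
    return sum(wi * qi for wi, qi in zip(w, q)) / sum(w)

# Two-sample caricature of a PGC exit: a hot fast slug and a cool slow slug.
rho = [0.5, 1.2]      # density, kg/m^3 (illustrative)
u = [400.0, 100.0]    # velocity, m/s
T = [2000.0, 900.0]   # temperature, K

mass_flux = [r * v for r, v in zip(rho, u)]        # rho*u weighting
mom_flux = [r * v * v for r, v in zip(rho, u)]     # rho*u^2 weighting

T_mass = weighted_average(T, mass_flux)   # mass-averaged temperature
T_mom = weighted_average(T, mom_flux)     # momentum-flux-averaged temperature
```

The paper's finding is that the momentum-flux-based average is the one whose steady work extraction best matches the unsteady calculation.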

  17. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    NASA Astrophysics Data System (ADS)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large scale dispersive processes which are embedded in a pore-scale advection diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scales. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainties giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called ''mobile-immobile'' conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high velocity region (mobile zone), while convective effects are neglected in a low velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two equation ''mobile-mobile'' model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell. 
Consistent with the working hypotheses of the two-region model, we subdivide the pore space into two volumes, which we select according to the features of the local micro-scale velocity field. Assuming separation of scales, the mathematical development associated with the averaging method in the two volumes leads to a generalized two-equation model. The final (upscaled) formulation includes the standard first-order mass exchange term together with additional terms, which we discuss. Our developments allow us to identify the assumptions that are usually implicitly embedded in the adoption of a two-region mobile-mobile model. All macro-scale properties introduced in this model can be determined explicitly from the pore-scale geometry and hydrodynamics through the solution of a set of closure equations. We pursue here an unsteady closure of the problem, leading to the occurrence of nonlocal (in time) terms in the upscaled system of equations. We provide the solution of the closure problems for a simple application documenting the time-dependent and asymptotic behavior of the system.
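The first-order mass exchange term that couples the two regions can be isolated in a zero-dimensional sketch, with transport terms dropped and equal region volumes assumed (both simplifications are ours):

```python
def first_order_exchange(c1, c2, k=1.0, dt=0.01, steps=2000):
    """Explicit Euler integration of the first-order kinetic exchange
    between two regions of equal volume fraction:
        dc1/dt = -k * (c1 - c2),  dc2/dt = +k * (c1 - c2).
    Returns the final concentrations."""
    for _ in range(steps):
        ex = k * (c1 - c2) * dt
        c1 -= ex
        c2 += ex
    return c1, c2
```

Mass is conserved at every step and the concentration difference decays exponentially, which is the macroscopic signature of the diffusive inter-region transfer the upscaling replaces.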

  18. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...

  19. Traffic forecasting report : 2007.

    DOT National Transportation Integrated Search

    2008-05-01

    This is the sixth edition of the Traffic Forecasting Report (TFR). This edition of the TFR contains the latest (predominantly 2007) forecasting/modeling data as follows: : Functional class average traffic volume growth rates and trends : Vehi...

  20. The fatigue life study of polyphenylene sulfide composites filled with continuous glass fibers

    NASA Astrophysics Data System (ADS)

    Ye, Junjie; Hong, Yun; Wang, Yongkun; Zhai, Zhi; Shi, Baoquan; Chen, Xuefeng

    2018-04-01

In this study, an effective microscopic model is proposed to investigate the fatigue life of composites containing continuous glass fibers surrounded by a polyphenylene sulfide (PPS) matrix. The representative volume element is discretized by parametric elements. The microscopic model is established by employing the relation between average surface displacements and average surface tractions. Based on the experimental data, the required fatigue failure parameters of the PPS are determined. Two different fiber arrangements are considered for comparison. Numerical analyses indicated that the square edge packing provides better accuracy. In addition, the effects of microscopic structural parameters (fiber volume fraction, fiber off-axis angle) on the fatigue life of glass/PPS composites are further discussed. It is revealed that the effect of fiber strength degradation on the fatigue life of continuous fiber-reinforced composites can be ignored.

  1. Quantification of leachate discharged to groundwater using the water balance method and the hydrologic evaluation of landfill performance (HELP) model.

    PubMed

    Alslaibi, Tamer M; Abustan, Ismail; Mogheir, Yunes K; Afifi, Samir

    2013-01-01

Landfills are a source of groundwater pollution in the Gaza Strip. This study focused on the Deir Al Balah landfill, a unique sanitary landfill site in the Gaza Strip (i.e., it has a lining system and a leachate recirculation system). The objective of this article is to assess the quantity of leachate generated and its percolation to the groundwater aquifer at a specific site, using (i) the hydrologic evaluation of landfill performance (HELP) model and (ii) the water balance method (WBM). The results show that, using the HELP model, the average volume of leachate discharged from the Deir Al Balah landfill during the period 1997 to 2007 was around 6800 m3/year, while the average volume of leachate percolated through the clay layer was 550 m3/year, which represents around 8% of the generated leachate. The WBM indicated that the average volume of leachate discharged from the landfill during the same period was around 7660 m3/year, about half of which comes from the moisture content of the waste, while the remainder comes from the infiltration of precipitation and re-circulated leachate. The quantities of leachate estimated by the two methods were therefore very close. However, compared with the measured leachate quantity, these results were overestimates and indicated a dangerous threat to the groundwater aquifer, as there was no separation between municipal, hazardous and industrial wastes in the area.
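In its simplest annual form the water balance method reduces to a one-line budget. This sketch converts net infiltration through the cover to a volume and adds water released by the waste itself; it ignores storage changes and recirculation, both of which the study accounts for:

```python
def leachate_volume(precip_mm, et_mm, runoff_mm, area_m2, waste_moisture_m3=0.0):
    """Annual leachate volume (m^3) from a simple water balance:
    percolation = (P - ET - R) over the cell area, clamped at zero,
    plus moisture released from the waste. A sketch, not the HELP model."""
    infiltration_m3 = (precip_mm - et_mm - runoff_mm) / 1000.0 * area_m2
    return max(infiltration_m3, 0.0) + waste_moisture_m3
```

The study's finding that roughly half the WBM leachate comes from waste moisture corresponds to the second term dominating in dry years.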

  2. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. 
This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
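A minimal stand-in for the calendar-variable regression the study recommends is a day-of-week mean model; it captures the weekly pattern the study confirms, though the full model also includes other calendar dummies, special-day effects, and residual autocorrelation (sketch, names ours):

```python
from datetime import date, timedelta

def fit_day_of_week_means(start, counts):
    """Mean daily ED arrival count per weekday (0 = Monday .. 6 = Sunday),
    from a daily series beginning on `start`."""
    sums, n = [0.0] * 7, [0] * 7
    for i, c in enumerate(counts):
        d = (start + timedelta(days=i)).weekday()
        sums[d] += c
        n[d] += 1
    return [s / k if k else 0.0 for s, k in zip(sums, n)]

def forecast(start, means, horizon):
    """Forecast the next `horizon` days from the fitted weekday means."""
    return [means[(start + timedelta(days=h)).weekday()] for h in range(horizon)]

# Two weeks of synthetic arrivals with a pure weekly pattern, starting on a Monday.
means = fit_day_of_week_means(date(2005, 1, 3), [100 + (i % 7) for i in range(14)])
```

Replacing the weekday means with regression coefficients on weekday, month, and holiday dummies gives the multiple linear regression benchmark the authors endorse.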

  3. Effect of mannitol on globe and orbital volumes in humans.

    PubMed

    Weber, Adam C; Blandford, Alexander D; Costin, Bryan R; Perry, Julian D

    2018-03-01

To determine the effect of intravenous mannitol on globe and orbital volumes. Retrospective chart review of a consecutive series of Cleveland Clinic Neurosurgical Intensive Care Unit patients who underwent computed tomographic imaging before and after intravenous mannitol administration. Volume measurements were performed according to a previously described technique by averaging axial image areas. Measurements before and after mannitol administration were compared using paired t-test. Fourteen patients (28 eyes) met inclusion criteria. Average globe volume decreased 186 mm³ (-2.5%, p = 0.02) after mannitol administration, while average orbital volume increased 353 mm³ (+3.5%, p = 0.04). Average globe volume change for subjects with follow-up scan less than 4.7 hours (mean 1.9 hours; range 0.2-4.5 hours) after mannitol administration was -125 mm³ (-1.7%, p = 0.24) and average orbital volume change was +458 mm³ (+5.1%, p = 0.11). Average globe volume change after mannitol administration for those with follow-up more than 4.7 hours (average 13.9 hours, range 4.9-24.7 hours) was -246 mm³ (-3.3%, p = 0.05) and orbital volume change was +248 mm³ (+2.2%, p = 0.24). Dividing the study population into groups based on mannitol dose did not yield any statistically significant change. Human globe volume decreases after intravenous mannitol administration, while orbital volume increases. These volume changes occur during the time period when intraocular pressure normalizes, after the pressure-lowering effects of the drug. This novel volumetric information improves our understanding of mannitol's mechanism of action and its effects on human ocular and periocular tissues.
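The cited volume technique (averaging axial image areas) amounts, in its simplest form, to summing slice areas times slice thickness; a sketch with a hypothetical percent-change helper of the kind used to express the reported deltas:

```python
def volume_from_axial_areas(slice_areas_mm2, slice_thickness_mm):
    """Volume (mm^3) of a structure from stacked axial CT slices:
    sum of per-slice cross-sectional areas times slice thickness."""
    return sum(slice_areas_mm2) * slice_thickness_mm

def percent_change(before_mm3, after_mm3):
    """Signed percent change between pre- and post-mannitol volumes."""
    return (after_mm3 - before_mm3) / before_mm3 * 100.0
```

The areas themselves would come from segmenting the globe or orbit on each axial image, which is outside this sketch.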

  4. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  5. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as in a single-point formulation.
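The 1/8-factor form of QUICK described above can be written out directly: the face value is the linear average of the two straddling nodes minus one-eighth of the upstream-biased curvature. A minimal sketch on a uniform grid, with U, C, D the upstream, central, and downstream nodes:

```python
def quick_face(phi_U, phi_C, phi_D):
    """QUICK face value on a uniform grid: central interpolation minus
    (1/8) times the upstream-biased curvature term."""
    return 0.5 * (phi_C + phi_D) - 0.125 * (phi_U - 2.0 * phi_C + phi_D)

# Quadratic upstream interpolation is exact for a parabola: with phi = x**2
# at nodes x = 0, 1, 2, the face at x = 1.5 evaluates to 1.5**2 = 2.25.
print(quick_face(0.0, 1.0, 4.0))  # -> 2.25
```

Being exact for quadratics is precisely what gives the scheme its third-order truncation error in the flux-difference form discussed above.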

  6. LakeVOC; A Deterministic Model to Estimate Volatile Organic Compound Concentrations in Reservoirs and Lakes

    USGS Publications Warehouse

    Bender, David A.; Asher, William E.; Zogorski, John S.

    2003-01-01

This report documents LakeVOC, a model to estimate volatile organic compound (VOC) concentrations in lakes and reservoirs. LakeVOC represents the lake or reservoir as a two-layer system and estimates VOC concentrations in both the epilimnion and hypolimnion. The air-water flux of a VOC is characterized in LakeVOC in terms of the two-film model of air-water exchange. LakeVOC solves the system of coupled differential equations for the VOC concentration in the epilimnion, the VOC concentration in the hypolimnion, the total mass of the VOC in the lake, the volume of the epilimnion, and the volume of the hypolimnion. A series of nine simulations was conducted to verify LakeVOC's representation of mixing, dilution, and gas exchange in a hypothetical lake, and two additional estimates of lake volume and MTBE concentrations were made for an actual reservoir under environmental conditions. These 11 simulations showed that LakeVOC correctly handled mixing, dilution, and gas exchange. The model also adequately estimated VOC concentrations within the epilimnion of an actual reservoir with daily input parameters. As the parameter-input time scale increased (from daily to weekly to monthly, for example), the differences between the measured-averaged concentrations and the model-estimated concentrations generally increased, especially for the hypolimnion. This may be because, as the time scale is increased from daily to weekly to monthly, the averaging of model inputs causes a loss of detail in the model estimates.
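The two-film characterization of the air-water flux amounts to two transfer resistances in series. A minimal sketch of that idea (the transfer velocities and the dimensionless Henry's law constant below are illustrative placeholders, not LakeVOC's calibrated values):

```python
# Two-film model of air-water gas exchange: overall transfer velocity from
# liquid-film and gas-film resistances in series (hypothetical parameters).

def overall_transfer_velocity(k_liquid, k_gas, H_dimensionless):
    """Overall liquid-phase transfer velocity, 1/KL = 1/kl + 1/(kg*H)."""
    return 1.0 / (1.0 / k_liquid + 1.0 / (k_gas * H_dimensionless))

def air_water_flux(C_water, C_air, k_liquid, k_gas, H):
    """Positive flux = volatilization (water to air); units are
    concentration times velocity."""
    KL = overall_transfer_velocity(k_liquid, k_gas, H)
    return KL * (C_water - C_air / H)

# Example: a sparingly volatile compound (H ~ 0.02, MTBE-like order of magnitude)
KL = overall_transfer_velocity(1e-5, 1e-3, 0.02)   # m/s
print(KL, air_water_flux(10.0, 0.0, 1e-5, 1e-3, 0.02))
```

For low-H compounds like MTBE the gas-film resistance matters, which is why the overall velocity here is well below the liquid-film value alone.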

  7. The effect of laparotomy and external fixator stabilization on pelvic volume in an unstable pelvic injury.

    PubMed

    Ghanayem, A J; Wilber, J H; Lieberman, J M; Motta, A O

    1995-03-01

Determine if laparotomy further destabilizes an unstable pelvic injury and increases pelvic volume, and if reduction and stabilization restores pelvic volume and prevents volume changes secondary to laparotomy. Cadaveric pelvic fracture model. Unilateral open-book pelvic ring injuries were created in five fresh cadaveric specimens by directly disrupting the pubic symphysis, left sacroiliac joint, and sacrospinous and sacrotuberous ligaments. Pelvic volume was determined using computerized axial tomography for the intact pelvis, the disrupted pelvis with a laparotomy incision both opened and closed, and the disrupted pelvis reduced and stabilized using an external fixator with the laparotomy incision opened. The average volume increase in the entire pelvis (from the top of the iliac crests to the bottom of the ischial tuberosities) between a nonstabilized injury with the abdomen closed and then subsequently opened was 15 +/- 5% (423 cc). The average increase in entire pelvic volume between a stabilized and reduced pelvis and a nonstabilized pelvis, both with the abdomen open, was 26 +/- 5% (692 cc). The pubic diastasis increased from 3.9 to 9.3 cm in a nonstabilized pelvis with the abdomen closed and then subsequently opened. Application of a single-pin anterior-frame external fixator reduced the pubic diastasis anatomically and reduced the average entire and true (from the pelvic brim to the ischial tuberosities) pelvic volumes to within 3 +/- 4 and 8 +/- 6% of the initial volume, respectively. We believe that the abdominal wall provides stability to an unstable pelvic ring injury via a tension band effect on the iliac wings. Our results demonstrate that a laparotomy further destabilized an open-book pelvic injury and subsequently increased pelvic volume and pubic diastasis. This could potentially increase blood loss from the pelvic injury and delay the tamponade effect of reduction and stabilization.
A single-pin external fixator prevents the destabilizing effect of the laparotomy and effectively reduces pelvic volume. These data support reduction and temporary stabilization of unstable pelvic injuries before or concomitantly with laparotomy.

  8. Phase averaging method for the modeling of the multiprobe and cutaneous cryosurgery

    NASA Astrophysics Data System (ADS)

Shilnikov, K. E.; Kudryashov, N. A.; Gaiur, I. Y.

    2017-12-01

In this paper we consider the problem of planning and optimization of cutaneous and multiprobe cryosurgery operations. An explicit scheme based on a finite volume approximation of the phase averaged Pennes bioheat transfer model is applied. The flux relaxation method is used to improve the stability of the scheme. Skin tissue is treated as a strongly inhomogeneous medium. The computerized planning tool is tested on model cryotip-based and cutaneous cryosurgery problems. For the case of cutaneous cryosurgery, mounting an additional freezing element is studied as an approach to optimizing propagation of the cellular necrosis front.
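The Pennes bioheat equation underlying the scheme balances conduction against blood perfusion toward arterial temperature. A minimal 1-D explicit finite-volume update illustrating that structure (the tissue properties, perfusion coefficient, grid, and cryoprobe boundary value below are illustrative, not the paper's phase-averaged coefficients or its flux-relaxation scheme):

```python
import numpy as np

# 1-D explicit step for rho*c*dT/dt = k*d2T/dx2 + perf*(T_a - T)
# (illustrative parameters; stability requires k*dt/(rho_c*dx^2) < 1/2).
rho_c = 3.6e6      # volumetric heat capacity of tissue, J/(m^3 K)
k = 0.5            # thermal conductivity, W/(m K)
perf = 2.4e3       # perfusion term w_b*rho_b*c_b, W/(m^3 K)
T_a = 37.0         # arterial blood temperature, degC
dx, dt = 1e-3, 0.5

def pennes_step(T):
    """One explicit update; T[0] is a frozen (cryoprobe) boundary,
    T[-1] is held at core tissue temperature."""
    Tn = T.copy()
    lap = (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2]) / dx**2
    T[1:-1] = Tn[1:-1] + dt / rho_c * (k * lap + perf * (T_a - Tn[1:-1]))
    return T

T = np.full(51, 37.0)
T[0] = -50.0                      # cryoprobe surface held at -50 degC
for _ in range(200):              # 100 s of simulated cooling
    T = pennes_step(T)
```

After the loop the cold front has penetrated a few millimetres, with temperature rising monotonically away from the probe, which is the qualitative behavior a necrosis-front planning tool tracks.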

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo; Craig, Tim

Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics.
Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.
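Of the three models compared, the weighted KNN is the simplest to sketch: predict a new patient's objective-function weights as a distance-weighted average of the weights of geometrically similar training patients. The feature and weight values below are invented placeholders standing in for OV/OVSR-style inputs, not data from the study.

```python
import numpy as np

# Generic distance-weighted K-nearest-neighbor regression of a weight vector
# from geometry features (all numbers are hypothetical).

def knn_predict_weights(X_train, W_train, x_new, k=3, eps=1e-9):
    """Inverse-distance-weighted average of the k nearest training
    patients' objective-function weight vectors."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    return (w[:, None] * W_train[idx]).sum(axis=0) / w.sum()

X = np.array([[0.10, 0.4], [0.12, 0.5], [0.30, 0.9], [0.35, 1.0]])  # e.g. OV, OVSR
W = np.array([[0.7, 0.3], [0.6, 0.4], [0.2, 0.8], [0.1, 0.9]])      # bladder/rectum weights
print(knn_predict_weights(X, W, np.array([0.11, 0.45]), k=2))
```

Because the query sits exactly between the first two training points, the prediction is their plain average, [0.65, 0.35]; for an asymmetric query the nearer neighbor dominates.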

  10. Simulation of streamflow and water quality in the Leon Creek watershed, Bexar County, Texas, 1997-2004

    USGS Publications Warehouse

    Ockerman, Darwin J.; Roussel, Meghan C.

    2009-01-01

The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers and the San Antonio River Authority, configured, calibrated, and tested a Hydrological Simulation Program-FORTRAN watershed model for the approximately 238-square-mile Leon Creek watershed in Bexar County, Texas, and used the model to simulate streamflow and water quality (focusing on loads and yields of selected constituents). Streamflow in the model was calibrated and tested with available data from five U.S. Geological Survey streamflow-gaging stations for 1997-2004. Simulated streamflow volumes closely matched measured streamflow volumes at all streamflow-gaging stations. Total simulated streamflow volumes were within 10 percent of measured values. Streamflow volumes are greatly influenced by large storms. Two months that included major floods accounted for about 50 percent of all the streamflow measured at the most downstream gaging station during 1997-2004. Water-quality properties and constituents (water temperature, dissolved oxygen, suspended sediment, dissolved ammonia nitrogen, dissolved nitrate nitrogen, and dissolved and total lead and zinc) in the model were calibrated using available data from 13 sites in and near the Leon Creek watershed for varying periods of record during 1992-2005. Average simulated daily mean water temperature and dissolved oxygen at the most downstream gaging station during 1997-2000 were within 1 percent of average measured daily mean water temperature and dissolved oxygen. Simulated suspended-sediment load at the most downstream gaging station during 2001-04 (excluding July 2002 because of major storms) was 77,700 tons compared with 74,600 tons estimated from a streamflow-load regression relation (coefficient of determination = 0.869). Simulated concentrations of dissolved ammonia nitrogen and dissolved nitrate nitrogen closely matched measured concentrations after calibration.
At the most downstream gaging station, average simulated monthly mean concentrations of dissolved ammonia and nitrate during 1997-2004 were 0.03 and 0.37 milligram per liter, respectively. For the most downstream station, the measured and simulated concentrations of dissolved and total lead and zinc for stormflows during 1993-97 after calibration do not match particularly closely. For base-flow conditions during 1997-2004 at the most downstream station, the match between simulated and measured concentrations is better. For example, the median simulated concentration of total lead (for 2,041 days) was 0.96 microgram per liter, and the median measured concentration (for nine samples) was 1.0 microgram per liter. To demonstrate an application of the Leon Creek watershed model, streamflow constituent loads and yields for suspended sediment, dissolved nitrate nitrogen, and total lead were simulated at the mouth of Leon Creek (outlet of the watershed) for 1997-2004. The average suspended-sediment load was 51,800 tons per year. The average suspended-sediment yield was 0.34 ton per acre per year. The average load of dissolved nitrate at the outlet of the watershed was 802 tons per year. The corresponding yield was 10.5 pounds per acre per year. The average load of lead at the outlet was 3,900 pounds per year. The average lead yield was 0.026 pound per acre per year. The degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error associated with the Leon Creek model. Major storms contribute most of the streamflow loads for certain constituents. For example, the three largest stormflows contributed about 64 percent of the entire suspended-sediment load at the most downstream station during 1997-2004.
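The reported yields follow directly from the annual loads and the roughly 238-square-mile drainage area; a quick arithmetic check reproduces all three figures:

```python
# Load-to-yield conversions for the ~238 mi^2 Leon Creek watershed.
ACRES_PER_SQ_MILE = 640.0
area_acres = 238.0 * ACRES_PER_SQ_MILE          # ~152,320 acres

sediment_yield = 51_800.0 / area_acres          # tons/acre/yr
nitrate_yield = 802.0 * 2000.0 / area_acres     # lb/acre/yr (2,000 lb per ton)
lead_yield = 3_900.0 / area_acres               # lb/acre/yr

print(round(sediment_yield, 2), round(nitrate_yield, 1), round(lead_yield, 3))
# -> 0.34 10.5 0.026
```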

  11. SU-F-T-119: Development of Heart Prediction Model to Increase Accuracy of Dose Reconstruction for Radiotherapy Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, E; Choi, M; Lee, C

Purpose: To assess individual variation in heart volume and location in order to develop a prediction model of the heart. This heart prediction model will be used to calculate individualized heart doses for radiotherapy patients in epidemiological studies. Methods: Chest CT images for 30 adult male and 30 adult female patients were obtained from the NIH Clinical Center. Image-analysis computer programs were used to segment the whole heart and 8 sub-regions and to measure the volume of each sub-region and the dimension of the whole heart. An analytical dosimetry method was used for the 30 adult female patients to estimate mean heart dose during conventional left breast radiotherapy. Results: The average volumes of the whole heart were 803.37 cm³ (COV 18.8%) and 570.19 cm³ (COV 18.8%) for adult male and female patients, respectively, which are comparable with the international reference volumes of 807.69 cm³ for males and 596.15 cm³ for females. Some patient characteristics were strongly correlated (R² > 0.5) with heart volume and heart dimensions (e.g., Body Mass Index vs. heart depth in males: R² = 0.54; weight vs. heart width in the adult females: R² = 0.63). We found that the mean heart dose of 3.805 Gy (assuming a prescribed dose of 50 Gy) in the breast radiotherapy simulations of the 30 adult females could be an underestimate (up to 1.6-fold) or overestimate (up to 1.8-fold) of the patient-specific heart dose. Conclusion: The study showed significant variation in patient heart volumes and dimensions, resulting in substantial dose errors when a single average heart model is used for retrospective dose reconstruction. We are completing a multivariate analysis to develop a prediction model of the heart. This model will increase accuracy in dose reconstruction for radiotherapy patients and allow us to individualize heart dose calculations for patients whose CT images are not available.

  12. Analysis and calculation of macrosegregation in a casting ingot. MPS solidification model. Volume 3: Operating manual

    NASA Technical Reports Server (NTRS)

    Maples, A. L.

    1980-01-01

    The operation of solidification model 1 is described. Model 1 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of horizontal axisymmetric bidirectional solidification. The calculation is restricted to steady-state solidification; there is no variation in final local average composition in the direction of isotherm movement. The physics of the model are given.

  13. Optical characterization of multi-scale morphologically complex heterogeneous media - Application to snow with soot impurities

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoyu; Haussener, Sophia

    2018-02-01

A multi-scale methodology for the radiative transfer analysis of heterogeneous media composed of morphologically-complex components on two distinct scales is presented. The methodology incorporates the exact morphology at the various scales and utilizes volume-averaging approaches with the corresponding effective properties to couple the scales. At the continuum level, the volume-averaged coupled radiative transfer equations are solved utilizing (i) effective radiative transport properties obtained by direct Monte Carlo simulations at the pore level, and (ii) averaged bulk material properties obtained at the particle level by Lorenz-Mie theory or discrete dipole approximation calculations. This model is applied to a soot-contaminated snow layer, and is experimentally validated with reflectance measurements of such layers. A quantitative and decoupled understanding of the morphological effect on the radiative transport is achieved, and a significant influence of the dual-scale morphology on the macroscopic optical behavior is observed. Our results show that with a small amount of soot particles, of the order of 1 ppb in volume fraction, the reduction in reflectance of a snow layer with large ice grains can reach up to 77% (at a wavelength of 0.3 μm). Soot impurities modeled as compact agglomerates yield 2-3% lower reduction of the reflectance in a thick snow layer compared to snow with soot impurities modeled as chain-like agglomerates. Soot impurities modeled as equivalent spherical particles underestimate the reflectance reduction by 2-8%. This study implies that the morphology of the heterogeneities in a medium significantly affects the macroscopic optical behavior and, specifically for soot-contaminated snow, indicates the non-negligible role of soot in the absorption behavior of snow layers. The methodology can be equally used in technical applications for the assessment and optimization of optical performance in multi-scale media.

  14. SToRM: A Model for Unsteady Surface Hydraulics Over Complex Terrain

    USGS Publications Warehouse

    Simoes, Francisco J.

    2014-01-01

A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model is based on an unstructured cell-centered finite volume formulation and a nonlinear strong stability preserving Runge-Kutta time stepping scheme. The numerical discretization is founded on the classical and well established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. The model's implementation within a graphical user interface is discussed. Field application of the model is illustrated by utilizing it to estimate peak flow discharges in a flooding event of historic significance in Colorado, U.S.A., in 2013.
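The "diffusive numerical flux" option mentioned above can be illustrated with the simplest member of that family, a Rusanov (local Lax-Friedrichs) flux for the 1-D shallow water equations. This is a sketch of the flux type, not SToRM's actual auto-switching implementation.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def swe_flux(h, hu):
    """Physical flux of the 1-D shallow water equations (mass, momentum)."""
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def rusanov_flux(hL, huL, hR, huR):
    """Diffusive (local Lax-Friedrichs) numerical flux: average of the
    physical fluxes minus dissipation scaled by the fastest wave speed."""
    fL, fR = swe_flux(hL, huL), swe_flux(hR, huR)
    s = max(abs(huL / hL) + math.sqrt(G * hL),
            abs(huR / hR) + math.sqrt(G * hR))
    return (0.5 * (fL[0] + fR[0]) - 0.5 * s * (hR - hL),
            0.5 * (fL[1] + fR[1]) - 0.5 * s * (huR - huL))

# Lake at rest: equal depths, zero velocity -> zero mass flux,
# momentum flux equal to the hydrostatic pressure term 0.5*g*h^2.
print(rusanov_flux(2.0, 0.0, 2.0, 0.0))
```

The dissipation term is what keeps the scheme stable across shocks (hydraulic jumps); a Godunov-type solver switches to a sharper Riemann flux where that dissipation would smear the solution.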

  15. A simple model to predict the biodiesel blend density as simultaneous function of blend percent and temperature.

    PubMed

    Gaonkar, Narayan; Vaidya, R G

    2016-05-01

A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the blend at any two different temperatures. We find that the density of a blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained with the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
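The approach reduces to two steps: take each component's density as linear in temperature from its two measured points, then combine them with Kay's volume-fraction-weighted rule. A sketch with illustrative density values (not the paper's measurements):

```python
# Kay's mixing rule with linear-in-temperature component densities
# (all density values below are illustrative).

def linear_density(T, T1, rho1, T2, rho2):
    """Component density at T interpolated/extrapolated from two measurements."""
    slope = (rho2 - rho1) / (T2 - T1)
    return rho1 + slope * (T - T1)

def blend_density(T, vol_frac_bio, bio_pts, diesel_pts):
    """Kay's rule: volume-fraction-weighted sum of component densities."""
    rho_b = linear_density(T, *bio_pts)
    rho_d = linear_density(T, *diesel_pts)
    return vol_frac_bio * rho_b + (1.0 - vol_frac_bio) * rho_d

# (T1, rho1, T2, rho2) in degC and kg/m^3
bio = (15.0, 880.0, 45.0, 858.0)
diesel = (15.0, 835.0, 45.0, 814.0)
print(round(blend_density(30.0, 0.20, bio, diesel), 1))   # B20 at 30 degC
```

With these example inputs the B20 density at 30 degC comes out near 833.4 kg/m³, between the two component densities, as the rule guarantees.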

  16. Histologic Evaluation of Micronized AlloDerm After Injection Laryngoplasty in a Rabbit Model.

    PubMed

    Oldenburg, Michael S; Janus, Jeff; Voss, Steve; San Marina, Serban; Chen, Tiffany; Garcia, Joaquin; Ekbom, Dale

    2017-05-01

Micronized AlloDerm is a commonly used injectable material for injection laryngoplasty; however, the histologic response to laryngeal implantation and the resorption rate over time have not been elucidated. This study aimed to evaluate the in vivo response of micronized AlloDerm over time after laryngeal implantation using a rabbit model. Animal model. The left recurrent laryngeal nerve was sectioned in five New Zealand White rabbits to create a vocal cord paralysis. Two weeks later, injection laryngoplasty was performed with 100 μL of micronized AlloDerm. Animals were sacrificed 4 (two rabbits) and 12 (three rabbits) weeks after injection. Histologic sections were stained and evaluated by a single pathologist. Volume estimates were made by assuming the implant took an ellipsoid shape, using dimensions calculated from histologic slides. In all cases, histological analysis revealed a lymphocytic inflammatory response infiltrating the peripheral margins of injection. After 4 weeks, the volume of injected material remaining in two rabbits was 404 and 278 mm³ (average 341 mm³). After 12 weeks, the volume of injected material remaining in three rabbits was 0, 61, and 124 mm³ (average 62 mm³), an 82% difference in volume of material between animals sacrificed at 4 weeks versus 12 weeks. Injection laryngoplasty using micronized AlloDerm induces a lymphocytic inflammatory response after injection in a rabbit model. Though a significant amount of material remains after 4 weeks, by 12 weeks the majority has been reabsorbed. Laryngoscope, 127:E166-E169, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
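The ellipsoid volume approximation and the 82% figure are easy to reproduce; the diameters passed to the volume function below are hypothetical, while the mean volumes in the percent calculation are the reported ones.

```python
import math

def ellipsoid_volume(d1, d2, d3):
    """Ellipsoid volume from three orthogonal diameters (semi-axes = d/2),
    mirroring the study's implant-volume approximation."""
    return (4.0 / 3.0) * math.pi * (d1 / 2) * (d2 / 2) * (d3 / 2)

# Hypothetical implant diameters in mm:
print(round(ellipsoid_volume(9.0, 8.0, 10.0)))   # mm^3, same order as reported

# Reported mean remaining volumes at 4 and 12 weeks:
drop = 100.0 * (341.0 - 62.0) / 341.0
print(round(drop))   # -> 82 (% difference between time points)
```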

  17. General slip regime permeability model for gas flow through porous media

    NASA Astrophysics Data System (ADS)

    Zhou, Bo; Jiang, Peixue; Xu, Ruina; Ouyang, Xiaolong

    2016-07-01

A theoretical effective gas permeability model was developed for rarefied gas flow in porous media, which holds over the entire slip regime with the permeability derived as a function of the Knudsen number. This general slip regime model (GSR model) is derived from the pore-scale Navier-Stokes equations subject to the first-order wall slip boundary condition using the volume-averaging method. The local closure problem for the volume-averaged equations is studied analytically and numerically using a periodic sphere array geometry. The GSR model includes a rational fraction function of the Knudsen number, which leads to a limiting effective permeability as the Knudsen number increases. The mechanism for this behavior is the viscous fluid inner friction caused by converging-diverging flow channels in porous media. A linearization of the GSR model leads to the Klinkenberg equation for slightly rarefied gas flows. Finite element simulations show that the Klinkenberg model overestimates the effective permeability by as much as 33% when a flow approaches the transition regime. The GSR model reduces to the unified permeability model [F. Civan, "Effective correlation of apparent gas permeability in tight porous media," Transp. Porous Media 82, 375 (2010)] for flow in the slip regime and clarifies the physical significance of the empirical parameter b in the unified model.
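The contrast between a rational-fraction slip correction and its Klinkenberg-type linearization can be sketched as follows. The functional form and the coefficients a and c are placeholders illustrating the general shape, not the expressions derived from the paper's closure problem.

```python
# Rational fraction of the Knudsen number vs. its first-order linearization.
# Coefficients a and c are illustrative placeholders.

def k_general(kn, k0=1.0, a=6.0, c=1.5):
    """Bounded slip correction: k0*(1 + a*kn)/(1 + c*kn) tends to
    k0*a/c as kn grows, i.e. a limiting effective permeability."""
    return k0 * (1.0 + a * kn) / (1.0 + c * kn)

def k_klinkenberg(kn, k0=1.0, a=6.0, c=1.5):
    """Taylor expansion about kn = 0: k0*(1 + (a - c)*kn), which grows
    without bound and so overshoots at larger kn."""
    return k0 * (1.0 + (a - c) * kn)

for kn in (0.01, 0.1):
    print(kn, round(k_general(kn), 4), round(k_klinkenberg(kn), 4))
```

At small Knudsen number the two agree; approaching the transition regime the linear form overshoots the bounded rational-fraction form, which is the qualitative behavior the finite element comparison reports.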

  18. Average volume of alcohol consumption and all-cause mortality in African Americans: the NHEFS cohort.

    PubMed

    Sempos, Christopher T; Rehm, Jürgen; Wu, Tiejian; Crespo, Carlos J; Trevisan, Maurizio

    2003-01-01

To analyze the relationship between average volume of alcohol consumption and all-cause mortality in African Americans. Prospective cohort study--the NHANES Epidemiologic Follow-Up Study (NHEFS)--with baseline data collected 1971 through 1975 as part of the first National Health and Nutrition Examination Survey (NHANES I) and follow-up through 1992. The analytic data set consisted of 2054 African American men (n = 768) and women (n = 1,286), 25 to 75 years of age, who were followed for approximately 19 years. Alcohol was measured with a quantity-frequency measure at baseline. All-cause mortality. No J-shaped curve was found in the relationship between average volume of alcohol consumption and mortality for male or female African Americans. Instead, no beneficial effect appeared, and mortality increased with average consumption above one drink per day. The absence of the J-shape in African Americans may result from more detrimental drinking patterns in this population and, consequently, a lack of protective effects of alcohol on coronary heart disease. Taking into account the sampling design did not substantially change the results from the models, which assumed a simple random sample. If this result can be confirmed in other samples, alcohol policy, especially prevention, should better incorporate patterns of drinking into programs.

  19. Tumor-volume simulation during radiotherapy for head-and-neck cancer using a four-level cell population model.

    PubMed

Chvetsov, Alexei V; Dong, Lei; Palta, Jatinder R; Amdur, Robert J

    2009-10-01

    To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy that is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of total cell number and (2) tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on the averaged values of radiobiologic parameters was able to describe eight cases during the entire treatment and four cases partially (50% of treatment time) with a maximum 20% error. The largest discrepancies between the model and clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, like concurrent chemotherapy, on tumor volume should be evaluated.
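The four-compartment bookkeeping described above can be sketched as a daily update: viable cells either survive a fraction or become lethally damaged, damaged oxygenated cells clear exponentially, and tumor volume is taken proportional to the total cell number. The surviving fractions, clearance half-life, and initial hypoxic fraction below are illustrative placeholders, not the fitted radiobiologic parameters.

```python
import math

# Minimal sketch of a four-level cell population model
# (oxygenated/hypoxic x viable/lethally damaged); illustrative parameters.

def simulate(days, s_oxic=0.5, s_hypoxic=0.7, clearance_halflife=7.0,
             N_ov=0.8, N_hv=0.2):
    """Daily fractions: viable cells survive with fraction s or become
    lethally damaged; damaged oxygenated cells clear exponentially, while
    hypoxic debris is retained. Returns relative tumor volume per day."""
    lam = math.log(2.0) / clearance_halflife
    N_od = N_hd = 0.0
    volume = []
    for _ in range(days):
        N_od = N_od * math.exp(-lam) + N_ov * (1.0 - s_oxic)  # clearance + new damage
        N_hd = N_hd + N_hv * (1.0 - s_hypoxic)                # hypoxic debris retained
        N_ov *= s_oxic
        N_hv *= s_hypoxic
        volume.append(N_ov + N_od + N_hv + N_hd)
    return volume

v = simulate(30)
```

With these placeholders the relative volume shrinks monotonically and plateaus near the retained hypoxic fraction, the kind of regression curve the model is fitted against.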

  20. Assessment of circulation and inter-basin transport in the Salish Sea including Johnstone Strait and Discovery Islands pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khangaonkar, Tarang; Long, Wen; Xu, Wenwei

The Salish Sea, consisting of Puget Sound and Georgia Basin in U.S. and Canadian waters, has been the subject of several independent data collection and modeling studies. However, these interconnected basins and their hydrodynamic interactions have not received attention as a contiguous unit. The Strait of Juan de Fuca is the primary pathway through which Pacific Ocean water enters the Salish Sea, but the role played by Johnstone Strait and the complex channels northeast of Vancouver Island, connecting the Salish Sea and the Pacific Ocean, on overall Salish Sea circulation has not been characterized. In this paper we present a modeling-based assessment of the two-layer circulation and transport through the multiple interconnected sub-basins within the Salish Sea, including the effect of exchange via Johnstone Strait and the Discovery Islands. The Salish Sea Model previously developed using the finite volume community ocean model (FVCOM) was expanded over the continental shelf for this assessment, encircling Vancouver Island and including the Discovery Islands, Johnstone Strait, Broughton Archipelago, and the associated waterways. A computational technique was developed to allow summation of volume fluxes across arbitrary transects through unstructured finite volume cells. Tidally averaged volume fluxes were computed at multiple transects. The results were used to validate the classic model of Circulation in Embracing Sills for Puget Sound and to provide quantitative estimates of the lateral distribution of tidally averaged transport through the system. Sensitivity tests with and without exchanges through Johnstone Strait demonstrate that it is a pathway for Georgia Basin runoff and Fraser River water to exit the Salish Sea and for Pacific Ocean inflow. However, the relative impact of this exchange on circulation and flushing in the Puget Sound Basin is small.
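The transect flux computation amounts to summing normal velocity times face area over the cell faces a transect intersects, then averaging over tidal cycles. A minimal sketch with made-up velocities and face areas (and the M2 period used only as an illustrative averaging window):

```python
import numpy as np

# Volume flux through a transect of an unstructured mesh, then a tidal average
# (all velocities and face areas are hypothetical).

def transect_flux(u_normal, face_area):
    """Instantaneous volume flux (m^3/s): sum of normal velocity * area."""
    return float(np.dot(u_normal, face_area))

def tidal_average(series, dt_s, period_s=44712.0):
    """Mean of the flux series over the last tidal period
    (44,712 s is the ~12.42-h M2 period)."""
    n = int(period_s / dt_s)
    return float(np.mean(series[-n:]))

areas = np.array([1.0e4, 2.0e4, 1.5e4])        # face areas, m^2
t = np.arange(0.0, 2 * 44712.0, 600.0)         # two M2 cycles, 10-min steps
flux = []
for ts in t:
    # mean flow of 0.05 m/s plus a 0.4 m/s tidal oscillation on every face
    u = (0.05 + 0.4 * np.sin(2 * np.pi * ts / 44712.0)) * np.ones(3)
    flux.append(transect_flux(u, areas))

print(round(tidal_average(flux, 600.0), 1))    # close to the 2250 m^3/s mean flow
```

Averaging over the tidal period removes most of the oscillatory transport, leaving the residual (two-layer exchange) flow that the study reports.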

  1. Simulating the water budget of a Prairie Potholes complex from LiDAR and hydrological models in North Dakota, USA

    USGS Publications Warehouse

    Huang, Shengli; Young, Claudia; Abdul-Aziz, Omar I.; Dahal, Devendra; Feng, Min; Liu, Shuguang

    2013-01-01

    Hydrological processes of the wetland complex in the Prairie Pothole Region (PPR) are difficult to model, partly due to a lack of wetland morphology data. We used Light Detection And Ranging (LiDAR) data sets to derive wetland features; we then modelled rainfall, snowfall, snowmelt, runoff, evaporation, the “fill-and-spill” mechanism, shallow groundwater loss, and the effect of wet and dry conditions. For large wetlands with a volume greater than thousands of cubic metres (e.g. about 3000 m3), the modelled water volume agreed fairly well with observations; however, it did not succeed for small wetlands (e.g. volume less than 450 m3). Despite the failure for small wetlands, the modelled water area of the wetland complex coincided well with interpretation of aerial photographs, showing a linear regression with R2 of around 0.80 and a mean average error of around 0.55 km2. The next step is to improve the water budget modelling for small wetlands.
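The "fill-and-spill" mechanism can be sketched as a daily bucket model for one pothole: storage gains precipitation and runoff, loses evaporation and seepage, and spills any volume above capacity downslope. All numbers below are illustrative, not the study's calibrated values.

```python
# Daily fill-and-spill water budget for a single pothole wetland
# (hypothetical capacity, inflows, and losses).

def fill_and_spill(capacity_m3, inflows, evap, seep, v0=0.0):
    """Return (daily storage series, total spilled volume in m^3)."""
    v, spilled = v0, 0.0
    series = []
    for q_in, e in zip(inflows, evap):
        v = max(0.0, v + q_in - e - seep)     # budget, floored at dry
        if v > capacity_m3:                   # spill the excess downslope
            spilled += v - capacity_m3
            v = capacity_m3
        series.append(v)
    return series, spilled

storage, spill = fill_and_spill(
    capacity_m3=3000.0,
    inflows=[500.0, 2600.0, 1200.0, 0.0, 0.0],   # rain/snowmelt runoff, m^3/day
    evap=[20.0, 25.0, 30.0, 35.0, 35.0],
    seep=10.0)
print(storage[-1], spill)   # -> 2910.0 1195.0
```

Chaining such buckets, with each wetland's spill feeding the next, is the essence of modeling a pothole complex; the study's point is that this works well for ~3000 m³ wetlands but breaks down below ~450 m³.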

  2. Random polycrystals of grains containing cracks: Model ofquasistatic elastic behavior for fractured systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, James G.; Grechka, Vladimir

    2006-07-08

    A model study on fractured systems was performed using a concept that treats isotropic cracked systems as ensembles of cracked grains, by analogy to isotropic polycrystalline elastic media. The approach has two advantages: (a) the averaging performed is ensemble averaging, thus avoiding the criticism legitimately leveled at most effective medium theories of quasistatic elastic behavior for cracked media based on volume concentrations of inclusions. Since crack effects are largely independent of the volume they occupy in the composite, such a non-volume-based method offers an appealingly simple modeling alternative. (b) The second advantage is that both polycrystals and fractured media are stiffer than might otherwise be expected, due to natural bridging effects of the strong components. These same effects have also often been interpreted as crack-crack screening in high-crack-density fractured media, but there is no inherent conflict between these two interpretations of this phenomenon. Results of the study are somewhat mixed. The spread in elastic constants observed in a set of numerical experiments is found to be very comparable to the spread in values contained between the Reuss and Voigt bounds for the polycrystal model. However, computed Hashin-Shtrikman bounds are much too tight to be in agreement with the numerical data, showing that polycrystals of cracked grains tend to violate some implicit assumptions of the Hashin-Shtrikman bounding approach. However, the self-consistent estimates obtained for the random polycrystal model are nevertheless very good estimators of the observed average behavior.
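    The Reuss and Voigt bounds referred to above are the harmonic and arithmetic averages of the constituent moduli, respectively. A minimal sketch, with purely illustrative grain moduli and fractions:

```python
import numpy as np

def voigt_reuss(moduli, fractions):
    """Voigt (uniform-strain) and Reuss (uniform-stress) bounds on the
    effective modulus of an ensemble of grains."""
    m = np.asarray(moduli, dtype=float)
    f = np.asarray(fractions, dtype=float)
    voigt = float(np.sum(f * m))            # arithmetic (stiffness) average
    reuss = 1.0 / float(np.sum(f / m))      # harmonic (compliance) average
    return voigt, reuss

# hypothetical bulk moduli (GPa) for three grain populations
Kv, Kr = voigt_reuss([37.0, 25.0, 10.0], [0.4, 0.4, 0.2])
print(Kv, Kr)    # the effective modulus lies between Kr and Kv
```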

  3. Interpretation of TEPC Measurements in Space Flights for Radiation Monitoring

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Dicello, John F.; Pisacane, Vincent; Cucinotta, Francis A.

    2007-01-01

    For the proper interpretation of radiation data measured in space, the results of integrated radiation transport models were compared with tissue equivalent proportional counter (TEPC) measurements. The TEPC provides a simple, time-dependent approach to radiation monitoring for astronauts on board the International Space Station. A newer approach to microdosimetry is the use of silicon-on-insulator (SOI) technology, launched on the MidSTAR-1 mission in low Earth orbit (LEO). In radiation protection practice, the average quality factor of a radiation field is defined as a function of linear energy transfer (LET), Qave(LET). However, a TEPC measures the average quality factor as a function of the lineal energy y, Qave(y), where y is defined as the average energy deposition in a volume divided by the average chord length of the volume. The deviation of y from LET is caused by energy straggling, delta-ray escape or entry, and nuclear fragments produced in the detector volume. The response distribution functions of wall-less and walled TEPCs were calculated from Monte Carlo track simulations. Using an integrated space radiation model (which includes the transport codes HZETRN and BRYNTRN, and the quantum nuclear interaction model QMSFRG) together with these response distribution functions, we compared model calculations with walled-TEPC measurements from NASA missions in LEO and made predictions for lunar and Mars missions. Good agreement was found for Qave(y) between the model and measured spectra from past NASA missions. The Qave(y) values for trapped or solar protons ranged from 1.9 to 2.5, overestimating the Qave(LET) values, which ranged from 1.4 to 1.6. Both quantities increase with shield thickness due to nuclear fragmentation. The Qave(LET) for the complete GCR spectra was found to be 3.5-4.5, while flight TEPCs measured 2.9-3.4 for Qave(y); the GCR values decrease with shield thickness. Our analysis of TEPC measurements can be used for proper interpretation of data observed in monitoring the space radiation environment.
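    As one concrete illustration of an LET-based average quality factor (a simplified calculation, not the paper's full response-function analysis), Qave(LET) can be computed as a dose-weighted average using the ICRP-60 Q(L) relationship; the toy dose spectrum below is assumed.

```python
import math

def icrp60_q(L):
    """ICRP-60 quality factor Q(L), with L the unrestricted LET in keV/um."""
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

def dose_average_q(lets, dose_fractions):
    """Dose-averaged quality factor: sum(Q(L) * D) / sum(D)."""
    num = sum(icrp60_q(L) * d for L, d in zip(lets, dose_fractions))
    return num / sum(dose_fractions)

# toy dose spectrum: mostly low-LET protons plus a small heavy-ion tail
q_ave = dose_average_q([0.5, 20.0, 150.0], [0.8, 0.15, 0.05])
print(q_ave)
```

Even a small high-LET dose fraction raises the average markedly, which is why GCR spectra give higher Qave than proton fields.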

  4. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diamant, A; Ybarra, N; Seuntjens, J

    2016-06-15

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven challenging, due to the complex interactions between an individual's biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient's characteristics and generate a robust model capable of predicting that patient's treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes' rule. Results: The optimal Bayesian model generated in this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor), and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance than competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible to accurately model the prognosis of an individual lung SBRT patient's treatment.
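    The model-averaging step described in the Methods can be sketched as posterior-weighted averaging over candidate graph structures. The graph probabilities and per-graph predictions below are hypothetical values for illustration.

```python
import numpy as np

def bma_predict(graph_probs, graph_predictions):
    """Bayesian model averaging: posterior-weighted average of
    P(failure | patient, graph) over the retained graph structures."""
    w = np.asarray(graph_probs, dtype=float)
    w = w / w.sum()                       # renormalise over the top graphs
    return float(np.sum(w * np.asarray(graph_predictions, dtype=float)))

# three hypothetical graphs and their predictions for one patient
p_fail = bma_predict([0.5, 0.3, 0.2], [0.10, 0.30, 0.60])
print(p_fail)
```

Averaging over structures, rather than committing to a single best graph, hedges against structural uncertainty when the cohort is small.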

  5. Impact of aircraft emissions on air quality in the vicinity of airports. Volume II. An updated model assessment of aircraft generated air pollution at LAX, JFK, and ORD. Final report Jan 1978-Jul 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamartino, R.J.; Smith, D.G.; Bremer, S.A.

    1980-07-01

    This report documents the results of the Federal Aviation Administration (FAA)/Environmental Protection Agency (EPA) air quality study conducted to assess the impact of aircraft emissions of carbon monoxide (CO), hydrocarbons (HC), and oxides of nitrogen (NOx) in the vicinity of airports. This assessment includes the results of recent modeling and monitoring efforts at Washington National (DCA), Los Angeles International (LAX), Dulles International (IAD), and Lakeland, Florida, airports, and an updated modeling of aircraft-generated pollution at LAX, John F. Kennedy (JFK), and Chicago O'Hare (ORD) airports. The Airport Vicinity Air Pollution (AVAP) model, which was designed for use at civil airports, was used in this assessment. In addition, the results of the application of the military version of the AVAP model, the Air Quality Assessment Model (AQAM), are summarized. Both the results of the pollution monitoring analyses in Volume I and the modeling studies in Volume II suggest that: maximum hourly average CO concentrations from aircraft are unlikely to exceed 5 parts per million (ppm) in areas of public exposure and are thus small in comparison to the National Ambient Air Quality Standard of 35 ppm; maximum hourly HC concentrations from aircraft can exceed 0.25 ppm over an area several times the size of the airport; and annual average NO2 concentrations from aircraft are estimated to contribute only 10 to 20 percent of the NAAQS limit level.

  6. Spatial feature analysis of a cosmic-ray sensor for measuring the soil water content: Comparison of four weighting methods

    NASA Astrophysics Data System (ADS)

    Cai, Jingya; Pang, Zhiguo; Fu, Jun'e.

    2018-04-01

    To quantitatively analyze the spatial features of a cosmic-ray sensor (CRS) (i.e., the measurement support volume of the CRS and the weight of the in situ point-scale soil water content (SWC) in terms of the regionally averaged SWC derived from the CRS) in measuring the SWC, cooperative observations based on CRS, oven drying and frequency domain reflectometry (FDR) methods are performed at the point and regional scales in a desert steppe area of the Inner Mongolia Autonomous Region. This region is flat with sparse vegetation cover consisting of only grass, thereby minimizing the effects of terrain and vegetation. Considering the two possibilities of the measurement support volume of the CRS, the results of four weighting methods are compared with the SWC monitored by FDR within an appropriate measurement support volume. The weighted average calculated using the neutron intensity-based weighting method (Ni weighting method) best fits the regionally averaged SWC measured by the CRS. Therefore, we conclude that the gyroscopic support volume and the weights determined by the Ni weighting method are the closest to the actual spatial features of the CRS when measuring the SWC. Based on these findings, a scale transformation model of the SWC from the point scale to the scale of the CRS measurement support volume is established. In addition, the spatial features simulated using the Ni weighting method are visualized by developing a software system.
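    A point-to-area weighting of the kind described, i.e. a normalised weighted average of point-scale SWC samples, can be sketched as follows. The weights and SWC values are illustrative only, not the paper's Ni-derived weights.

```python
def weighted_swc(theta, weights):
    """Regionally averaged soil water content as a normalised weighted
    mean of point-scale samples within the support volume."""
    return sum(w * t for w, t in zip(weights, theta)) / sum(weights)

theta = [0.12, 0.15, 0.10, 0.18]   # volumetric SWC at four FDR points
w = [0.40, 0.30, 0.20, 0.10]       # assumed weights, larger near the sensor
swc = weighted_swc(theta, w)
print(swc)
```

The choice of weighting scheme changes which point samples dominate the regional estimate, which is exactly what the four-method comparison above evaluates.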

  7. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    PubMed

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.

  8. Selected approaches to estimate water-budget components of the High Plains, 1940 through 1949 and 2000 through 2009

    USGS Publications Warehouse

    Stanton, Jennifer S.; Qi, Sharon L.; Ryter, Derek W.; Falk, Sarah E.; Houston, Natalie A.; Peterson, Steven M.; Westenbroek, Stephen M.; Christenson, Scott C.

    2011-01-01

    The High Plains aquifer, underlying almost 112 million acres in the central United States, is one of the largest aquifers in the Nation. It is the primary water supply for drinking water, irrigation, animal production, and industry in the region. Expansion of irrigated agriculture throughout the past 60 years has helped make the High Plains one of the most productive agricultural regions in the Nation. Extensive withdrawals of groundwater for irrigation have caused water-level declines in many parts of the aquifer and increased concerns about the long-term sustainability of the aquifer. Quantification of water-budget components is a prerequisite for effective water-resources management. Components analyzed as part of this study were precipitation, evapotranspiration, recharge, surface runoff, groundwater discharge to streams, groundwater fluxes to and from adjacent geologic units, irrigation, and groundwater in storage. These components were assessed for 1940 through 1949 (representing conditions prior to substantial groundwater development and referred to as "pregroundwater development" throughout this report) and 2000 through 2009. Because no single method can perfectly quantify the magnitude of any part of a water budget at a regional scale, results from several methods and previously published work were compiled and compared for this study when feasible. Results varied among the several methods applied, as indicated by the range of average annual volumes given for each component listed in the following paragraphs. 
Precipitation was derived from three sources: the Parameter-Elevation Regressions on Independent Slopes Model, data developed using Next Generation Weather Radar and measured precipitation from weather stations by the Office of Hydrologic Development at the National Weather Service for the Sacramento-Soil Moisture Accounting model, and precipitation measured at weather stations and spatially distributed using an inverse-distance-weighted interpolation method. Precipitation estimates using these sources, as a 10-year average annual total volume for the High Plains, ranged from 192 to 199 million acre-feet (acre-ft) for 1940 through 1949 and from 185 to 199 million acre-ft for 2000 through 2009. Evapotranspiration was obtained from three sources: the National Weather Service Sacramento-Soil Moisture Accounting model, the Simplified-Surface-Energy-Balance model using remotely sensed data, and the Soil-Water-Balance model. Average annual total evapotranspiration estimated using these sources was 148 million acre-ft for 1940 through 1949 and ranged from 154 to 193 million acre-ft for 2000 through 2009. The maximum amount of shallow groundwater lost to evapotranspiration was approximated for areas where the water table was within 5 feet of land surface. The average annual total volume of evapotranspiration from shallow groundwater was 9.0 million acre-ft for 1940 through 1949 and ranged from 9.6 to 12.6 million acre-ft for 2000 through 2009. Recharge was estimated using two soil-water-balance models as well as previously published studies for various locations across the High Plains region. Average annual total recharge ranged from 8.3 to 13.2 million acre-ft for 1940 through 1949 and from 15.9 to 35.0 million acre-ft for 2000 through 2009. Surface runoff and groundwater discharge to streams were determined using discharge records from streamflow-gaging stations near the edges of the High Plains and the Base-Flow Index program. 
For 1940 through 1949, the average annual net surface runoff leaving the High Plains was 1.9 million acre-ft, and the net loss from the High Plains aquifer by groundwater discharge to streams was 3.1 million acre-ft. For 2000 through 2009, the average annual net surface runoff leaving the High Plains region was 1.3 million acre-ft and the net loss by groundwater discharge to streams was 3.9 million acre-ft. For 2000 through 2009, the average annual total estimated groundwater pumpage volume from two soil-water-balance models ranged from 8.7 to 16.2 million acre-ft. Average annual irrigation application rates for the High Plains ranged from 8.4 to 16.2 inches per year. The USGS Water-Use Program published estimated total annual pumpage from the High Plains aquifer for 2000 and 2005. Those volumes were greater than those estimated from the two soil-water-balance models. Total groundwater in storage in the High Plains aquifer was estimated as 3,173 million acre-ft prior to groundwater development and 2,907 million acre-ft in 2007. The average annual decrease of groundwater in storage between 2000 and 2007 was 10 million acre-ft per year.

  9. A consensus-based dynamics for market volumes

    NASA Astrophysics Data System (ADS)

    Sabatelli, Lorenzo; Richmond, Peter

    2004-12-01

    We develop a model of trading orders based on opinion dynamics. The agents may be thought of as the shareholders of a major mutual fund rather than as direct traders. The balance between their buy and sell orders determines the size of the fund order (volume) and has an impact on prices and indexes. We assume agents interact simultaneously with each other through a Sznajd-like interaction. Their degree of connection is determined by the probability of changing opinion independently of what their neighbours are doing. We assume that this probability may change randomly after each transaction, by an amount proportional to the relative difference between the volatility then measured and a benchmark, which we take to be an exponential moving average of past volume values. We show how this simple model is compatible with some of the main statistical features observed for asset volumes in financial markets.
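    A toy rendering of the dynamics described above, under stated assumptions (ring topology, a simplified Sznajd convincing rule, and an assumed update gain for the flip probability), might look like:

```python
import random

def step(opinions, p, ema_vol, alpha=0.1, k=0.01):
    """One update: a Sznajd-like convincing move, independent opinion flips
    with probability p, then adjust p by the relative gap between the
    current volume and its exponential moving average."""
    n = len(opinions)
    i = random.randrange(n - 1)
    if opinions[i] == opinions[i + 1]:            # an agreeing pair convinces
        opinions[(i - 1) % n] = opinions[i]       # its outer neighbours
        opinions[(i + 2) % n] = opinions[i]
    opinions = [-o if random.random() < p else o for o in opinions]
    volume = abs(sum(opinions))                   # net buy/sell imbalance
    ema_vol = alpha * volume + (1.0 - alpha) * ema_vol
    p += k * (volume - ema_vol) / max(ema_vol, 1.0)
    p = min(1.0, max(0.0, p))                     # keep p a valid probability
    return opinions, p, ema_vol

random.seed(42)
ops = [random.choice((-1, 1)) for _ in range(100)]   # +1 buy, -1 sell
p, ema = 0.05, 1.0
for _ in range(500):
    ops, p, ema = step(ops, p, ema)
print(sum(ops), round(p, 3))
```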

  10. Computer-aided detection and quantification of endolymphatic hydrops within the mouse cochlea in vivo using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, George S.; Kim, Jinkyung; Applegate, Brian E.; Oghalai, John S.

    2017-07-01

    Diseases that cause hearing loss and/or vertigo in humans such as Meniere's disease are often studied using animal models. The volume of endolymph within the inner ear varies with these diseases. Here, we used a mouse model of increased endolymph volume, endolymphatic hydrops, to develop a computer-aided objective approach to measure endolymph volume from images collected in vivo using optical coherence tomography. The displacement of Reissner's membrane from its normal position was measured in cochlear cross sections. We validated our computer-aided measurements with manual measurements and with trained observer labels. This approach allows for computer-aided detection of endolymphatic hydrops in mice, with test performance showing sensitivity of 91% and specificity of 87% using a running average of five measurements. These findings indicate that this approach is accurate and reliable for classifying endolymphatic hydrops and quantifying endolymph volume.
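    The five-measurement running average used for classification above can be sketched as follows; the threshold, displacement values, and units are hypothetical.

```python
from collections import deque

def running_mean_classifier(displacements, threshold, window=5):
    """Flag endolymphatic hydrops when the running mean of `window`
    Reissner's-membrane displacement measurements exceeds `threshold`."""
    buf, flags = deque(maxlen=window), []
    for d in displacements:
        buf.append(d)
        flags.append(sum(buf) / len(buf) > threshold)
    return flags

# toy series: displacement rises mid-sequence; early warm-up means stay low
flags = running_mean_classifier([1, 1, 2, 8, 9, 10, 11, 12], threshold=5)
print(flags)
```

Averaging several consecutive measurements trades a little latency for noise rejection, which is how the reported sensitivity/specificity figures are obtained from noisy single frames.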

  11. FIELD PERFORMANCE OF ADVANCED TECHNOLOGY WOODSTOVES IN GLEN FALLS, NY, 1988-89 - VOLUME I

    EPA Science Inventory

    The report gives results of an evaluation of particulate emission trends for three models of catalytic and two models of non-catalytic woodstoves under in-home burning conditions during the 1988-89 heating season in Glens Falls, NY. The results (averaging 9.4 g/h and 9.4 g/kg) sh...

  12. FIELD PERFORMANCE OF ADVANCED TECHNOLOGY WOODSTOVES IN GLEN FALLS, NY, 1988-89 - VOLUME II - TECHNICAL APPENDICES

    EPA Science Inventory

    The report gives results of an evaluation of particulate emission trends for three models of catalytic and two models of non-catalytic woodstoves under "in-home" burning conditions during the 1988-89 heating season in Glens Falls, NY. The results (averaging 9.4 g/h and 9.4 g/kg...

  13. A discrete model of Ostwald ripening based on multiple pairwise interactions

    NASA Astrophysics Data System (ADS)

    Di Nunzio, Paolo Emilio

    2018-06-01

    A discrete multi-particle model of Ostwald ripening based on direct pairwise interactions is developed for particles with incoherent interfaces as an alternative to the classical LSW mean field theory. The rate of matter exchange depends on the average surface-to-surface interparticle distance, a characteristic feature of the system which naturally incorporates the effect of volume fraction of second phase. The multi-particle diffusion is described through the definition of an interaction volume containing all the particles involved in the exchange of solute. At small volume fractions this is proportional to the size of the central particle, at higher volume fractions it gradually reduces as a consequence of diffusion screening described on a geometrical basis. The topological noise present in real systems is also included. For volume fractions below about 0.1 the model predicts broad and right-skewed stationary size distributions resembling a lognormal function. Above this value, a transition to sharper, more symmetrical but still right-skewed shapes occurs. An excellent agreement with experiments is obtained for 3D particle size distributions of solid-solid and solid-liquid systems with volume fraction 0.07, 0.30, 0.52 and 0.74. The kinetic constant of the model depends on the cube root of volume fraction up to about 0.1, then increases rapidly with an upward concavity. It is in good agreement with the available literature data on solid-liquid mixtures in the volume fraction range from 0.20 to about 0.75.
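    A toy pairwise-exchange step in the spirit of the model (not the paper's formulation; the rate constant and driving-force form are arbitrary) can be written as:

```python
import math

def exchange(r1, r2, k=1e-3):
    """One pairwise ripening step between two spherical particles:
    matter flows from the smaller (higher-curvature) particle to the
    larger one, conserving total volume. `k` scales the exchanged volume."""
    small, large = sorted((r1, r2))
    dv = k * (1.0 / small - 1.0 / large)     # curvature-difference driving force
    v_small = (4.0 / 3.0) * math.pi * small ** 3 - dv
    v_large = (4.0 / 3.0) * math.pi * large ** 3 + dv
    to_radius = lambda v: (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0) if v > 0 else 0.0
    return to_radius(v_small), to_radius(v_large)

r_small, r_large = exchange(0.5, 1.0)
print(r_small, r_large)   # the smaller particle shrinks, the larger grows
```

Iterating such exchanges over all pairs within an interaction volume, with distance-dependent rates, is the kind of multi-particle bookkeeping the model above formalises.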

  14. Effects of Turbulence Model on Prediction of Hot-Gas Lateral Jet Interaction in a Supersonic Crossflow

    DTIC Science & Technology

    2015-07-01

    The three-dimensional, compressible, Reynolds-averaged Navier-Stokes (RANS) equations are solved using a finite volume method with a point-implicit time-integration scheme. The work was supported by high performance computing time from the US Department of Defense (DOD) High Performance Computing Modernization program at the US Army Research Laboratory.

  15. Quantification of gastric emptying and duodenogastric reflux stroke volumes using three-dimensional guided digital color Doppler imaging.

    PubMed

    Hausken, T; Li, X N; Goldman, B; Leotta, D; Ødegaard, S; Martin, R W

    2001-07-01

    To develop a non-invasive method for evaluating gastric emptying and duodenogastric reflux stroke volumes using three-dimensional (3D) guided digital color Doppler imaging. The technique involved color Doppler digital images of transpyloric flow in which the 3D position and orientation of the images were known by using a magnetic location system. In vitro, the system was found to slightly underestimate the reference flow (by 8.8% on average). In vivo (five volunteers), gastric emptying episodes lasted on average only 0.69 s, with an average stroke volume of 4.3 ml (range 1.1-7.4 ml), and duodenogastric reflux episodes lasted on average 1.4 s, with a volume of 8.3 ml (range 1.3-14.1 ml). With the appropriate instrument settings, orientation-determined color Doppler can be used for stroke volume quantification of gastric emptying and duodenogastric reflux episodes.

  16. Numerical Simulations of Homogeneous Turbulence Using Lagrangian-Averaged Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Mohseni, Kamran; Shkoller, Steve; Kosovic, Branko; Marsden, Jerrold E.; Carati, Daniele; Wray, Alan; Rogallo, Robert

    2000-01-01

    The Lagrangian-averaged Navier-Stokes (LANS) equations are numerically evaluated as a turbulence closure. They are derived from a novel Lagrangian averaging procedure on the space of all volume-preserving maps and can be viewed as a numerical algorithm which removes the energy content from the small scales (smaller than some a priori fixed spatial scale alpha) using a dispersive rather than dissipative mechanism, thus maintaining the crucial features of the large scale flow. We examine the modeling capabilities of the LANS equations for decaying homogeneous turbulence, ascertain their ability to track the energy spectrum of fully resolved direct numerical simulations (DNS), compare the relative energy decay rates, and compare LANS with well-accepted large eddy simulation (LES) models.

  17. Modelling Fluctuations in the Concentration of Neutrally Buoyant Substances in the Atmosphere.

    NASA Astrophysics Data System (ADS)

    Ride, David John

    1987-09-01

    Available from UMI in association with The British Library. This thesis sets out to model the probability density function (pdf) of the perceived concentration of a contaminant in the atmosphere using simple, physical representations of the dispersing contaminant. Sensors of differing types perceive a given concentration field in different ways; the chosen pdf must be able to describe all possible perceptions of the same field. Herein, sensors are characterised by the time taken to achieve a reading and by a threshold level of concentration below which the sensor does not respond and thus records a concentration of zero. A literature survey of theoretical and experimental work concerning concentration fluctuations is conducted, and the merits, or otherwise, of some standard pdfs in common use are discussed. The ways in which the central moments, the peak-to-mean ratio, the intermittency and the autocorrelation function behave under various combinations of threshold levels and time averaging are investigated. An original experiment designed to test the suitability of time averaging as a valid simulation of both sensor response times and sampling volumes is reported. The results suggest that, for practical purposes, smoothing from the combined volume/time characteristics of a sensor can be modelled by time averaging the output of a more responsive sensor. A possible non-linear volume/time effect was observed at very high temporal resolutions. Intermittency is shown to be an important parameter of the concentration field. A geometric model for describing and explaining the intermittency of a meandering plume of material, in terms of the ratio of the plume width to the amplitude of meander and the within-plume intermittency, is developed and validated. It shows that the model's cross-plume profiles of intermittency cannot, in general, be represented by simple functional forms.
A new physical model for the fluctuations in concentration from a dispersing contaminant is described which leads to the adoption of a truncated Gaussian (or 'clipped normal') pdf for time averaged concentrations. A series of experiments is described which was designed to test the aptness of this distribution and display changes in the perception of the parameters of the concentration field wrought by various combinations of thresholding and time averaging. The truncated Gaussian pdf is shown to be more suitable for describing fluctuations than the log-normal and negative exponential pdfs, and to possess a better physical basis than either of them. The combination of thresholding and time averaging on the output of a sensor is shown to produce complex results which could affect profoundly the assessment of the potential hazard presented by a toxic, flammable or explosive plume or cloud.
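    The clipped-normal model adopted above has a simple generative reading: draw an underlying Gaussian and clip negative values to zero, so the fraction of zero readings (the intermittency seen by an ideal sensor) equals Phi(-mu/sigma). A minimal sketch, with assumed parameters:

```python
import math
import random

def clipped_normal_sample(mu, sigma, n, seed=0):
    """Draw n concentrations from a clipped (truncated-at-zero) Gaussian."""
    rng = random.Random(seed)
    return [max(0.0, rng.gauss(mu, sigma)) for _ in range(n)]

def zero_fraction_theory(mu, sigma):
    """P(C = 0) = Phi(-mu/sigma) for the clipped-normal model."""
    return 0.5 * (1.0 + math.erf(-mu / (sigma * math.sqrt(2.0))))

xs = clipped_normal_sample(mu=1.0, sigma=2.0, n=50000)
empirical = sum(1 for x in xs if x == 0.0) / len(xs)
th = zero_fraction_theory(1.0, 2.0)
print(empirical, th)
```

Applying a sensor threshold simply moves the clipping point, which is why thresholding and time averaging interact in the complex way the experiments above describe.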

  18. Estimating Marine Aerosol Particle Volume and Number from Maritime Aerosol Network Data

    NASA Technical Reports Server (NTRS)

    Sayer, A. M.; Smirnov, A.; Hsu, N. C.; Munchak, L. A.; Holben, B. N.

    2012-01-01

    As well as spectral aerosol optical depth (AOD), aerosol composition and concentration (number, volume, or mass) are of interest for a variety of applications. However, remote sensing of these quantities is more difficult than for AOD, as it is more sensitive to assumptions relating to aerosol composition. This study uses spectral AOD measured on Maritime Aerosol Network (MAN) cruises, with the additional constraint of a microphysical model for unpolluted maritime aerosol based on analysis of Aerosol Robotic Network (AERONET) inversions, to estimate these quantities over open ocean. When the MAN data are subset to those likely to be comprised of maritime aerosol, number and volume concentrations obtained are physically reasonable. Attempts to estimate surface concentration from columnar abundance, however, are shown to be limited by uncertainties in vertical distribution. Columnar AOD at 550 nm and aerosol number for unpolluted maritime cases are also compared with Moderate Resolution Imaging Spectroradiometer (MODIS) data, for both the present Collection 5.1 and forthcoming Collection 6. MODIS provides a best-fitting retrieval solution, as well as the average for several different solutions, with different aerosol microphysical models. The average solution MODIS dataset agrees more closely with MAN than the best solution dataset. Terra tends to retrieve lower aerosol number than MAN, and Aqua higher, linked with differences in the aerosol models commonly chosen. Collection 6 AOD is likely to agree more closely with MAN over open ocean than Collection 5.1. In situations where spectral AOD is measured accurately, and aerosol microphysical properties are reasonably well-constrained, estimates of aerosol number and volume using MAN or similar data would provide for a greater variety of potential comparisons with aerosol properties derived from satellite or chemistry transport model data.

  19. Modelling Wind Effects on Subtidal Salinity in Apalachicola Bay, Florida

    NASA Astrophysics Data System (ADS)

    Huang, W.; Jones, W. K.; Wu, T. S.

    2002-07-01

    Salinity is an important factor for oyster and estuarine productivity in Apalachicola Bay. Observations of salinity at oyster reefs have indicated a high correlation between subtidal salinity variations and the surface winds along the bay axis in an approximately east-west direction. In this paper, we applied a calibrated hydrodynamic model to examine the surface wind effects on the volume fluxes in the tidal inlets and the subtidal salinity variations in the bay. Model simulations show that, due to the large size of the inlets located at the east and west ends of this long estuary, surface winds have significant effects on the volume fluxes through the estuary inlets for water exchange between the estuary and the ocean. In general, eastward winds cause inflow through the inlets at the western end and outflow through the inlets at the eastern end of the bay. Winds of 15 mph in the east-west direction can induce a 2000 m3 s-1 inflow of saline seawater into the bay through the inlets, a rate about 2.6 times the annual average freshwater inflow from the river. Due to the varied wind-induced volume fluxes in the inlets and the circulation in the bay, the subtidal salinity at oyster reefs increases considerably during strong east-west wind conditions compared to windless conditions. To better understand the characteristics of the wind-induced subtidal circulation and salinity variations, the researchers also conducted model simulations under constant east-west wind conditions. Results show that the volume fluxes are linearly proportional to the east-west wind stresses. Spatial distributions of daily average salinity and currents clearly show the significant effects of winds on the bay.

  20. Predictions of Poisson's ratio in cross-ply laminates containing matrix cracks and delaminations

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Allen, David H.; Nottorf, Eric W.

    1989-01-01

    A damage-dependent constitutive model for laminated composites has been developed for the combined damage modes of matrix cracks and delaminations. The model is based on the concept of continuum damage mechanics and uses second-order tensor valued internal state variables to represent each mode of damage. The internal state variables are defined as the local volume average of the relative crack face displacements. Since the local volume for delaminations is specified at the laminate level, the constitutive model takes the form of laminate analysis equations modified by the internal state variables. Model implementation is demonstrated for the laminate engineering modulus E(x) and Poisson's ratio nu(xy) of quasi-isotropic and cross-ply laminates. The model predictions are in close agreement to experimental results obtained for graphite/epoxy laminates.

  1. 40 CFR 1039.705 - How do I generate and calculate emission credits?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the following equation: Emission credits (kg) = (Std − FEL) × (Volume) × (AvgPR) × (UL) × (10⁻³) Where... family during the model year, as described in paragraph (c) of this section. AvgPR = the average maximum...
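The truncated equation above is a straightforward product. A hedged sketch of the calculation (the variable meanings follow the regulation text; the example numbers are illustrative, not from the CFR):

```python
# Sketch of the credit equation quoted above:
#   Emission credits (kg) = (Std - FEL) * Volume * AvgPR * UL * 1e-3

def emission_credits_kg(std, fel, volume, avg_pr, ul):
    """std, fel: emission standard and Family Emission Limit (g/kW-hr);
    volume: number of engines in the family; avg_pr: average maximum
    power (kW); ul: useful life (hours)."""
    return (std - fel) * volume * avg_pr * ul * 1e-3

# Example: an FEL 0.3 g/kW-hr below the standard, 1000 engines,
# 100 kW average power, 8000-hour useful life.
print(emission_credits_kg(std=4.0, fel=3.7, volume=1000, avg_pr=100.0, ul=8000))
```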

  2. 40 CFR 1039.705 - How do I generate and calculate emission credits?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the following equation: Emission credits (kg) = (Std − FEL) × (Volume) × (AvgPR) × (UL) × (10⁻³) Where... family during the model year, as described in paragraph (c) of this section. AvgPR = the average maximum...

  3. Wave Overtopping of a Barrier Beach

    NASA Astrophysics Data System (ADS)

    Thornton, E. B.; Laudier, N.; Macmahan, J. H.

    2009-12-01

    The rate of wave overtopping of a barrier beach is measured and modeled as a first step in modeling the breaching of a beach impounding an ephemeral river. Unique wave overtopping rate data are obtained by measuring the filling of the Carmel River, California, lagoon during a time when the lagoon is closed off and there is no river inflow. Volume changes are calculated from measured lagoon height changes owing to wave overtopping using a stage-volume curve, then center-differenced and averaged to provide volume rates of change in the lagoon. Wave height and period are obtained from CDIP MOPS directional wave spectra data in 15 m depth fronting the beach. Beach morphology was measured by GPS walking surveys and interpolated for beach slopes and berm heights. Three empirical overtopping models by van der Meer and Janssen (1995), Hedges and Reis (1998), and Pullen et al. (2007), with differing parameterizations of wave height, period, and beach slope and calibrated using extensive laboratory data obtained over plane, impermeable beaches, are compared with the data. In addition, the run-up model by Stockdon et al. (2006), based on field data, is examined. Three wave overtopping storm events are considered for which morphology data were available less than 2 weeks prior to the event. The models are tuned to fit the data using a reduction factor that accounts for beach permeability, berm characteristics, non-normal wave incidence, and surface roughness. It is concluded that the Stockdon et al. (2006) model underestimates run-up, as no overtopping is predicted with this model. The three empirical overtopping models behaved similarly well, with regression coefficients ranging from 0.72 to 0.86 using a reasonable range of reduction factors (0.66-0.81, average 0.74).
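The empirical models compared above share a common exponential structure. A hedged generic sketch (the coefficients a and b are illustrative of EurOtop-type formulas, not the calibrated values of any of the three models; gamma is the tuned reduction factor):

```python
import math

# Generic exponential wave-overtopping law of the form
#   q = sqrt(g * Hm0**3) * a * exp(-b * Rc / (gamma * Hm0))
# with q the mean discharge per metre of beach, Hm0 the wave height,
# Rc the freeboard (berm crest above still water), and gamma the
# reduction factor the authors tuned (0.66-0.81, mean 0.74).
# Coefficients a and b below are illustrative assumptions.

G = 9.81  # m/s^2

def overtopping_rate(hm0, rc, gamma, a=0.2, b=2.6):
    """Mean overtopping discharge q (m^3/s per m of beach)."""
    return math.sqrt(G * hm0 ** 3) * a * math.exp(-b * rc / (gamma * hm0))

# A smaller reduction factor (more dissipation) lowers the prediction:
q_hi = overtopping_rate(hm0=2.0, rc=1.5, gamma=0.81)
q_lo = overtopping_rate(hm0=2.0, rc=1.5, gamma=0.66)
print(q_lo, q_hi)
```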

  4. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 3

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning with the emphasis on fuel savings is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts. - both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees and the temperature difference is 3 degree Centigrade. These results indicate that the forecast model as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2 is a limiting factor and that the average potential fuel savings or penalty are up to 3.6 percent depending on the direction of flight.
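The component and vector error statistics quoted above can be sketched as follows (a minimal illustration; the speed/direction-to-component convention is an assumption of this sketch, and the sample winds are synthetic):

```python
import math

# Wind-verification statistics of the kind reported above: the RMS
# vector error between forecast and observed winds. Speeds in knots,
# directions in degrees.

def to_uv(speed, direction_deg):
    """Wind vector components from speed and direction (assumed
    convention for this sketch)."""
    rad = math.radians(direction_deg)
    return speed * math.sin(rad), speed * math.cos(rad)

def rms_vector_error(pairs):
    """pairs: list of ((fcst_spd, fcst_dir), (obs_spd, obs_dir))."""
    sq = []
    for (fs, fd), (os_, od) in pairs:
        fu, fv = to_uv(fs, fd)
        ou, ov = to_uv(os_, od)
        sq.append((fu - ou) ** 2 + (fv - ov) ** 2)
    return math.sqrt(sum(sq) / len(sq))

# Identical winds give zero error; a pure 10-kt speed bias gives 10 kts.
print(rms_vector_error([((50, 270), (50, 270))]))
print(rms_vector_error([((40, 270), (50, 270))]))
```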

  5. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    PubMed Central

    Fujisaki, Keisuke; Ikeda, Tomoyuki

    2013-01-01

    To connect models at different scales in the multi-scale problem of microwave use, equivalent material constants were investigated numerically by three-dimensional electromagnetic field analysis, taking into account eddy current and displacement current. A volume-averaged method and a standing wave method were used to derive the equivalent material constants; water particles and aluminum particles were used as composite materials. Consumed electrical power was used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for the two methods; different electric power is obtained for the two models. The differing electromagnetic phenomena derive from the expression of the eddy current. For small electrical conductivity, such as water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity, such as aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constants derived from the volume-averaged method and the standing wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity. PMID:28788395
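As a minimal sketch of the volume-averaged method named above, the simplest equivalent constant is the volume-fraction-weighted average of the phase constants (the permittivity values below are illustrative assumptions, not the paper's data):

```python
# Linear volume average of a material constant (here relative
# permittivity) for a two-phase composite: each phase is weighted by
# its volume fraction.

def volume_averaged_constant(eps_particle, eps_matrix, fill_fraction):
    """Volume-averaged equivalent constant of a two-phase mixture."""
    return fill_fraction * eps_particle + (1 - fill_fraction) * eps_matrix

# Example: 10% water particles (eps_r ~ 80) in a background of eps_r = 1.
eps_eff = volume_averaged_constant(80.0, 1.0, 0.10)
print(eps_eff)
```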

  6. Less Daily Computer Use is Related to Smaller Hippocampal Volumes in Cognitively Intact Elderly.

    PubMed

    Silbert, Lisa C; Dodge, Hiroko H; Lahna, David; Promjunyakul, Nutta-On; Austin, Daniel; Mattek, Nora; Erten-Lyons, Deniz; Kaye, Jeffrey A

    2016-01-01

    Computer use is becoming a common activity in the daily life of older individuals and declines over time in those with mild cognitive impairment (MCI). The relationship between daily computer use (DCU) and imaging markers of neurodegeneration is unknown. The objective of this study was to examine the relationship between average DCU and volumetric markers of neurodegeneration on brain MRI. Cognitively intact volunteers enrolled in the Intelligent Systems for Assessing Aging Change study underwent MRI. Total in-home computer use per day was calculated using mouse movement detection and averaged over a one-month period surrounding the MRI. Spearman's rank order correlation (univariate analysis) and linear regression models (multivariate analysis) examined hippocampal, gray matter (GM), white matter hyperintensity (WMH), and ventricular cerebral spinal fluid (vCSF) volumes in relation to DCU. A voxel-based morphometry analysis identified relationships between regional GM density and DCU. Twenty-seven cognitively intact participants used their computer for 51.3 minutes per day on average. Less DCU was associated with smaller hippocampal volumes (r = 0.48, p = 0.01), but not total GM, WMH, or vCSF volumes. After adjusting for age, education, and gender, less DCU remained associated with smaller hippocampal volume (p = 0.01). Voxel-wise analysis demonstrated that less daily computer use was associated with decreased GM density in the bilateral hippocampi and temporal lobes. Less daily computer use is associated with smaller brain volume in regions that are integral to memory function and known to be involved early with Alzheimer's pathology and conversion to dementia. Continuous monitoring of daily computer use may detect signs of preclinical neurodegeneration in older individuals at risk for dementia.

  7. Accuracy of the Generalized Self-Consistent Method in Modelling the Elastic Behaviour of Periodic Composites

    NASA Technical Reports Server (NTRS)

    Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.

    1993-01-01

    Local stress and strain fields in the unit cell of an infinite, two-dimensional, periodic fibrous lattice have been determined by an integral equation approach. The effect of the fibres is assimilated to an infinite two-dimensional array of fictitious body forces in the matrix constituent phase of the unit cell. By subtracting a volume averaged strain polarization term from the integral equation we effectively embed a finite number of unit cells in a homogenized medium in which the overall stress and strain correspond to the volume averaged stress and strain of the constrained unit cell. This paper demonstrates that the zeroth term in the governing integral equation expansion, which embeds one unit cell in the homogenized medium, corresponds to the generalized self-consistent approximation. By comparing the zeroth term approximation with higher order approximations to the integral equation summation, both the accuracy of the generalized self-consistent composite model and the rate of convergence of the integral summation can be assessed. Two example composites are studied. For a tungsten/copper elastic fibrous composite the generalized self-consistent model is shown to provide accurate effective elastic moduli and local field representations. The local elastic transverse stress field within the representative volume element of the generalized self-consistent method is shown to be in error by much larger amounts for a composite with periodically distributed voids, but homogenization leads to a cancelling of errors, and the effective transverse Young's modulus of the voided composite is shown to be in error by only 23% at a void volume fraction of 75%.

  8. Charging and Transport Dynamics of a Flow-Through Electrode Capacitive Deionization System.

    PubMed

    Qu, Yatian; Campbell, Patrick G; Hemmatifar, Ali; Knipe, Jennifer M; Loeb, Colin K; Reidy, John J; Hubert, Mckenzie A; Stadermann, Michael; Santiago, Juan G

    2018-01-11

    We present a study of the interplay among electric charging rate, capacitance, salt removal, and mass transport in "flow-through electrode" capacitive deionization (CDI) systems. We develop two models describing coupled transport and electro-adsorption/desorption which capture salt removal dynamics. The first model is a simplified, unsteady zero-dimensional volume-averaged model which identifies dimensionless parameters and figures of merit associated with cell performance. The second model is a higher fidelity area-averaged model which captures both spatial and temporal responses of charging. We further conducted an experimental study of these dynamics and considered two salt transport regimes: (1) an advection-limited regime and (2) a dispersion-limited regime. We use these data to validate the models. The study shows that, in the advection-limited regime, differential charge efficiency determines the salt adsorption at the early stage of the deionization process. Subsequently, charging transitions to a quasi-steady state where salt removal rate is proportional to applied current scaled by the inlet flow rate. In the dispersion-dominated regime, differential charge efficiency, cell volume, and diffusion rates govern adsorption dynamics and flow rate has little effect. In both regimes, the interplay among mass transport rate, differential charge efficiency, cell capacitance, and (electric) charging current governs salt removal in flow-through electrode CDI.
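A zero-dimensional volume-averaged model of this kind can be sketched as a single salt balance on the cell volume. This is a hedged reconstruction, not the authors' exact formulation, and all parameter values are illustrative:

```python
# Zero-dimensional, volume-averaged salt balance for a flow-through
# CDI cell under constant current (sketch, not the paper's model):
#   V * dc/dt = Q * (c_in - c) - Lambda * I / F
# c: cell/effluent salt concentration (mol/m^3), Q: flow rate (m^3/s),
# Lambda: differential charge efficiency, I: current (A),
# F: Faraday constant.

F = 96485.0  # C/mol

def simulate_cdi(c_in, q_flow, volume, current, charge_eff,
                 t_end, dt=0.01):
    """Forward-Euler integration; returns the final concentration."""
    c = c_in
    t = 0.0
    while t < t_end:
        dcdt = (q_flow * (c_in - c) - charge_eff * current / F) / volume
        c += dt * dcdt
        t += dt
    return c

# Quasi-steady state: removal Q*(c_in - c) balances Lambda*I/F, so
# c_ss = c_in - Lambda*I/(F*Q) -- consistent with the reported finding
# that steady salt removal scales with current over inlet flow rate.
c_ss = simulate_cdi(c_in=20.0, q_flow=1e-6, volume=1e-6,
                    current=0.05, charge_eff=0.8, t_end=20.0)
print(c_ss)
```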

  9. Modeling of internal and near-nozzle flow for a GDI fuel injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, Kaushik; Som, Sibendu; Battistoni, Michele

    A numerical study of two-phase flow inside the nozzle holes and the issuing spray jets for a multi-hole direct injection gasoline injector has been presented in this work. The injector geometry is representative of the Spray G nozzle, an eight-hole counterbore injector, from the Engine Combustion Network (ECN). Simulations have been carried out for a fixed needle lift. Effects of turbulence, compressibility and non-condensable gases have been considered in this work. The standard k-ε turbulence model has been used to model the turbulence. The Homogeneous Relaxation Model (HRM) coupled with a Volume of Fluid (VOF) approach has been utilized to capture the phase change phenomena inside and outside the injector nozzle. Three different boundary conditions for the outlet domain have been imposed to examine non-flashing and evaporative, non-flashing and non-evaporative, and flashing conditions. Noticeable hole-to-hole variations have been observed in terms of mass flow rates for all the holes under all the operating conditions considered in this study. Mild cavitation-like phenomena inside the nozzle holes and flash boiling in the near-nozzle region have been predicted when liquid fuel is subjected to a superheated ambient. Under favorable conditions, considerable flashing has been observed in the near-nozzle regions. An enormous volume is occupied by the gasoline vapor, at substantial computational cost. Volume-averaging instead of mass-averaging is observed to be more effective, especially for finer mesh resolutions.

  10. Three-Dimensional Photography for Quantitative Assessment of Penile Volume-Loss Deformities in Peyronie's Disease.

    PubMed

    Margolin, Ezra J; Mlynarczyk, Carrie M; Mulhall, John P; Stember, Doron S; Stahl, Peter J

    2017-06-01

    Non-curvature penile deformities are prevalent and bothersome manifestations of Peyronie's disease (PD), but the quantitative metrics that are currently used to describe these deformities are inadequate and non-standardized, presenting a barrier to clinical research and patient care. To introduce erect penile volume (EPV) and percentage of erect penile volume loss (percent EPVL) as novel metrics that provide detailed quantitative information about non-curvature penile deformities and to study the feasibility and reliability of three-dimensional (3D) photography for measurement of quantitative penile parameters. We constructed seven penis models simulating deformities found in PD. The 3D photographs of each model were captured in triplicate by four observers using a 3D camera. Computer software was used to generate automated measurements of EPV, percent EPVL, penile length, minimum circumference, maximum circumference, and angle of curvature. The automated measurements were statistically compared with measurements obtained using water-displacement experiments, a tape measure, and a goniometer. Accuracy of 3D photography for average measurements of all parameters compared with manual measurements; inter-test, intra-observer, and inter-observer reliabilities of EPV and percent EPVL measurements as assessed by the intraclass correlation coefficient. The 3D images were captured in a median of 52 seconds (interquartile range = 45-61). On average, 3D photography was accurate to within 0.3% for measurement of penile length. It overestimated maximum and minimum circumferences by averages of 4.2% and 1.6%, respectively; overestimated EPV by an average of 7.1%; and underestimated percent EPVL by an average of 1.9%. All inter-test, inter-observer, and intra-observer intraclass correlation coefficients for EPV and percent EPVL measurements were greater than 0.75, reflective of excellent methodologic reliability. 
By providing highly descriptive and reliable measurements of penile parameters, 3D photography can empower researchers to better study volume-loss deformities in PD and enable clinicians to offer improved clinical assessment, communication, and documentation. This is the first study to apply 3D photography to the assessment of PD and to accurately measure the novel parameters of EPV and percent EPVL. This proof-of-concept study is limited by the lack of data in human subjects, which could present additional challenges in obtaining reliable measurements. EPV and percent EPVL are novel metrics that can be quickly, accurately, and reliably measured using computational analysis of 3D photographs and can be useful in describing non-curvature volume-loss deformities resulting from PD. Margolin EJ, Mlynarczyk CM, Mulhall JP, et al. Three-Dimensional Photography for Quantitative Assessment of Penile Volume-Loss Deformities in Peyronie's Disease. J Sex Med 2017;14:829-833. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  11. Microstructure and critical strain of dynamic recrystallization of 6082 aluminum alloy in thermal deformation

    NASA Astrophysics Data System (ADS)

    Ren, W. W.; Xu, C. G.; Chen, X. L.; Qin, S. X.

    2018-05-01

    Using high temperature compression experiments, true stress-true strain curves of 6082 aluminum alloy were obtained at temperatures of 460°C-560°C and strain rates of 0.01 s-1-10 s-1. The effects of deformation temperature and strain rate on the microstructure are investigated, and (−∂lnθ/∂ε)-ε curves are plotted based on the σ-ε curves. Critical strains for dynamic recrystallization of 6082 aluminum alloy were obtained. The results showed that lower strain rates increase the volume fraction of recrystallization but coarsen the average recrystallized grain size, while high strain rates refine the average grain size and yield a smaller volume fraction of dynamically recrystallized grain than low strain rates. High temperature reduced the dislocation density and provided less driving force for recrystallization, so that coarse grains remained. The dynamic recrystallization critical strain model and the thermal experiment results can effectively predict the recrystallization critical point of 6082 aluminum alloy during thermal deformation.

  12. Quantification of the thorax-to-abdomen breathing ratio for breathing motion modeling.

    PubMed

    White, Benjamin M; Zhao, Tianyu; Lamb, James; Bradley, Jeffrey D; Low, Daniel A

    2013-06-01

    The purpose of this study was to develop a methodology to quantitatively measure the thorax-to-abdomen breathing ratio from a 4DCT dataset for breathing motion modeling and breathing motion studies. The thorax-to-abdomen breathing ratio was quantified by measuring the rate of cross-sectional volume increase throughout the thorax and abdomen as a function of tidal volume. Twenty-six 16-slice 4DCT patient datasets were acquired during quiet respiration using a protocol that acquired 25 ciné scans at each couch position. Fifteen datasets included data from the neck through the pelvis. Tidal volume, measured using a spirometer and abdominal pneumatic bellows, was used as the breathing-cycle surrogate. For each CT slice, the cross-sectional volume encompassed by the skin contour exhibited a nearly linear relationship with tidal volume. A robust iteratively reweighted least squares regression analysis was used to determine η(i), defined as the amount of cross-sectional volume expansion at each slice i per unit tidal volume. The sum Ση(i) throughout all slices was predicted to be the ratio of the geometric expansion of the lungs to the tidal volume, 1.11. The Xiphoid process was selected as the boundary between the thorax and abdomen. The Xiphoid process slice was identified in a scan acquired at mid-inhalation. The imaging protocol had not originally been designed for measuring the thorax-to-abdomen breathing ratio, so the scans did not extend to the anatomy with η(i) = 0. Extrapolation of η(i) to η(i) = 0 was used to include the entire breathing volume. The thorax and abdomen regions were individually analyzed to determine the thorax-to-abdomen breathing ratios. There were 11 image datasets that had been scanned only through the thorax. For these cases, the abdomen breathing component was taken as 1.11 - Ση(i), where the sum was taken throughout the thorax. 
The average Ση(i) for thorax and abdomen image datasets was found to be 1.20 ± 0.17, close to the expected value of 1.11. The thorax-to-abdomen breathing ratio was 0.32 ± 0.24. The average Ση(i) was 0.26 ± 0.14 in the thorax and 0.93 ± 0.22 in the abdomen. In the scan datasets that encompassed only the thorax, the average Ση(i) was 0.21 ± 0.11. A method to quantify the relationship between abdomen and thoracic breathing was developed and characterized.
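The per-slice analysis described above can be sketched as a set of slope fits: for each CT slice, the cross-sectional volume enclosed by the skin contour is regressed against tidal volume, and the slope η(i) gives that slice's expansion per unit tidal volume. Plain least squares stands in here for the robust iteratively reweighted fit the authors used, and the data are synthetic:

```python
import numpy as np

# eta_i per slice: slope of (cross-sectional volume vs tidal volume),
# summed over slices to estimate the total expansion per unit tidal
# volume (the paper's expected value is 1.11).

def slice_expansion_rates(tidal_volume, slice_volumes):
    """slice_volumes: array of shape (n_scans, n_slices).
    Returns eta_i per slice and their sum."""
    etas = np.array([
        np.polyfit(tidal_volume, slice_volumes[:, i], 1)[0]
        for i in range(slice_volumes.shape[1])
    ])
    return etas, etas.sum()

# Synthetic data: 25 scans, 3 slices with known slopes plus noise.
rng = np.random.default_rng(0)
tv = rng.uniform(0.0, 600.0, size=25)          # tidal volume, mL
true_eta = np.array([0.02, 0.05, 0.04])        # per-slice slopes
slices = 100.0 + np.outer(tv, true_eta) + rng.normal(0, 0.5, (25, 3))
etas, total = slice_expansion_rates(tv, slices)
print(total)   # should be near 0.11
```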

  13. Macroscopic balance model for wave rotors

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    1996-01-01

    A mathematical model for multi-port wave rotors is described. The wave processes that effect energy exchange within the rotor passage are modeled using one-dimensional gas dynamics. Macroscopic mass and energy balances relate volume-averaged thermodynamic properties in the rotor passage control volume to the mass, momentum, and energy fluxes at the ports. Loss models account for entropy production in boundary layers and in separating flows caused by blade-blockage, incidence, and gradual opening and closing of rotor passages. The mathematical model provides a basis for predicting design-point wave rotor performance, port timing, and machine size. Model predictions are evaluated through comparisons with CFD calculations and three-port wave rotor experimental data. A four-port wave rotor design example is provided to demonstrate model applicability. The modeling approach is amenable to wave rotor optimization studies and rapid assessment of the trade-offs associated with integrating wave rotors into gas turbine engine systems.

  14. Study on highway transportation greenhouse effect external cost estimation in China

    NASA Astrophysics Data System (ADS)

    Chu, Chunchao; Pan, Fengming

    2017-03-01

    This paper focuses on estimating highway transportation greenhouse gas emission volume and greenhouse gas external cost in China. First, the composition and characteristics of greenhouse gases in highway transportation emissions were analysed. Secondly, an improved model of emission volume was presented on the basis of highway transportation energy consumption, which may be calculated from the main affecting factors such as the annual average operating mileage of each type of motor vehicle and the unit consumption level. The emission volume model considered both the availability of energy consumption statistics for highway transportation and the greenhouse gas emission factors of various fuel types issued by the IPCC. Finally, the external cost estimation model for highway transportation greenhouse gas emission was established by combining emission volume with the unit external cost of CO2 emissions. An example covering 2011 to 2015 in China was executed to confirm the presented model. The calculated results show that the highway transportation total emission volume and greenhouse gas external cost are growing, but the unit turnover external cost is steadily declining. Overall, the situation is still grim for highway transportation greenhouse gas emission, and the green transportation strategy should be put into effect as soon as possible.
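The two-stage estimate described above can be sketched as: fuel use by vehicle type (fleet size × annual distance × unit consumption) converted to CO2 with IPCC-style emission factors, then multiplied by a unit external cost. All numbers below are illustrative assumptions, not the paper's data:

```python
# Emission-volume and external-cost sketch. The fleet figures,
# emission factors, and unit cost are illustrative only.

# vehicle type -> (fleet size, annual km per vehicle, litres fuel / km)
FLEET = {
    "passenger_car": (1_000_000, 15_000, 0.08),
    "heavy_truck": (100_000, 60_000, 0.30),
}
CO2_KG_PER_LITRE = {"passenger_car": 2.3, "heavy_truck": 2.7}  # assumed
UNIT_EXTERNAL_COST = 0.03  # currency units per kg CO2 (assumed)

def emission_volume_kg():
    """Total CO2 emission volume across the fleet."""
    total = 0.0
    for vtype, (n, km, litres_per_km) in FLEET.items():
        total += n * km * litres_per_km * CO2_KG_PER_LITRE[vtype]
    return total

def external_cost():
    """Greenhouse gas external cost = emission volume x unit cost."""
    return emission_volume_kg() * UNIT_EXTERNAL_COST

print(f"{emission_volume_kg():.3e} kg CO2, cost {external_cost():.3e}")
```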

  15. Design of Particulate-Reinforced Composite Materials

    PubMed Central

    Muc, Aleksander; Barski, Marek

    2018-01-01

    A microstructure-based model is developed to study the effective anisotropic properties (magnetic, dielectric or thermal) of two-phase particle-filled composites. The Green’s function technique and the effective field method are used to theoretically derive the homogenized (averaged) properties for a representative volume element containing isolated inclusion and infinite, chain-structured particles. Those results are compared with the finite element approximations conducted for the assumed representative volume element. In addition, the Maxwell–Garnett model is retrieved as a special case when particle interactions are not considered. We also give some information on the optimal design of the effective anisotropic properties taking into account the shape of magnetic particles. PMID:29401678
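The Maxwell-Garnett special case mentioned above (spherical inclusions with no particle interactions) can be sketched directly; the permittivity values and volume fraction below are illustrative:

```python
# Maxwell-Garnett mixing rule for the effective relative permittivity
# of spherical inclusions (eps_i, volume fraction f) in a matrix
# (eps_m).

def maxwell_garnett(eps_i, eps_m, f):
    """Effective relative permittivity of a dilute particle composite."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Dilute limit: f -> 0 recovers the matrix permittivity.
print(maxwell_garnett(10.0, 2.0, 0.0))
print(maxwell_garnett(10.0, 2.0, 0.2))
```

The effective value always lies between the matrix and inclusion permittivities, which is a quick sanity check on any mixing rule.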

  16. Modelling the impact of retention-detention units on sewer surcharge and peak and annual runoff reduction.

    PubMed

    Locatelli, Luca; Gabriel, Søren; Mark, Ole; Mikkelsen, Peter Steen; Arnbjerg-Nielsen, Karsten; Taylor, Heidi; Bockhorn, Britta; Larsen, Hauge; Kjølby, Morten Just; Blicher, Anne Steensen; Binning, Philip John

    2015-01-01

    Stormwater management using water sensitive urban design is expected to be part of future drainage systems. This paper aims to model the combination of local retention units, such as soakaways, with subsurface detention units. Soakaways are employed to reduce the peak and volume of stormwater runoff by storage and infiltration; however, large retention volumes are required for a significant peak reduction. Peak runoff can therefore be handled by combining detention units with soakaways. This paper models the impact of retrofitting retention-detention units for an existing urbanized catchment in Denmark. The impact of retrofitting a retention-detention unit of 3.3 m³/100 m² (volume/impervious area) was simulated for a small catchment in Copenhagen using MIKE URBAN. The retention-detention unit was shown to prevent flooding from the sewer for a 10-year rainfall event. Statistical analysis of continuous simulations covering 22 years showed that annual stormwater runoff was reduced by 68-87%, and that the retention volume was on average 53% full at the beginning of rain events. The effect of different retention-detention volume combinations was simulated, and results showed that allocating 20-40% of a soakaway volume to detention would significantly increase peak runoff reduction with a small reduction in the annual runoff.

  17. Rock Fracture Toughness Under Mode II Loading: A Theoretical Model Based on Local Strain Energy Density

    NASA Astrophysics Data System (ADS)

    Rashidi Moghaddam, M.; Ayatollahi, M. R.; Berto, F.

    2018-01-01

    The values of mode II fracture toughness reported in the literature for several rocks are studied theoretically by using a modified criterion based on strain energy density averaged over a control volume around the crack tip. The modified criterion takes into account the effect of T-stress in addition to the singular terms of stresses/strains. The experimental results are related to mode II fracture tests performed on the semicircular bend and Brazilian disk specimens. There is good agreement between the theoretical predictions using the generalized averaged strain energy density criterion and the experimental results. The theoretical results reveal that the value of mode II fracture toughness is affected by the size of the control volume around the crack tip and also by the magnitude and sign of the T-stress.
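In schematic form (a sketch of the general criterion, not the authors' exact expressions), the averaged strain energy density criterion predicts the onset of fracture when the strain energy density averaged over the control volume reaches a critical material value:

```latex
\bar{W} \;=\; \frac{1}{V}\int_{V} W \,\mathrm{d}V \;\ge\; W_{c},
\qquad
W_{c} \;=\; \frac{\sigma_{t}^{2}}{2E},
```

where $W$ is the local strain energy density (here including the T-stress contribution in addition to the singular terms), $V$ is the control volume around the crack tip, $\sigma_{t}$ is the tensile strength, and $E$ is Young's modulus; the stated form of $W_{c}$ assumes an ideally brittle material.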

  18. Safety modeling of urban arterials in Shanghai, China.

    PubMed

    Wang, Xuesong; Fan, Tianxiang; Chen, Ming; Deng, Bing; Wu, Bing; Tremont, Paul

    2015-10-01

    Traffic safety on urban arterials is influenced by several key variables including geometric design features, land use, traffic volume, and travel speeds. This paper is an exploratory study of the relationship of these variables to safety. It uses a comparatively new method of measuring speeds by extracting GPS data from taxis operating on Shanghai's urban network. This GPS derived speed data, hereafter called Floating Car Data (FCD) was used to calculate average speeds during peak and off-peak hours, and was acquired from samples of 15,000+ taxis traveling on 176 segments over 18 major arterials in central Shanghai. Geometric design features of these arterials and surrounding land use characteristics were obtained by field investigation, and crash data was obtained from police reports. Bayesian inference using four different models, Poisson-lognormal (PLN), PLN with Maximum Likelihood priors (PLN-ML), hierarchical PLN (HPLN), and HPLN with Maximum Likelihood priors (HPLN-ML), was used to estimate crash frequencies. Results showed the HPLN-ML models had the best goodness-of-fit and efficiency, and models with ML priors yielded estimates with the lowest standard errors. Crash frequencies increased with increases in traffic volume. Higher average speeds were associated with higher crash frequencies during peak periods, but not during off-peak periods. Several geometric design features including average segment length of arterial, number of lanes, presence of non-motorized lanes, number of access points, and commercial land use, were positively related to crash frequencies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Comparison study of portable bladder scanner versus cone-beam CT scan for measuring bladder volumes in post-prostatectomy patients undergoing radiotherapy.

    PubMed

    Ung, K A; White, R; Mathlum, M; Mak-Hau, V; Lynch, R

    2014-01-01

    In post-prostatectomy radiotherapy to the prostatic bed, consistent bladder volume is essential to maintain the position of treatment target volume. We assessed the differences between bladder volume readings from a portable bladder scanner (BS-V) and those obtained from planning CT (CT-V) or cone-beam CT (CBCT-V). Interfraction bladder volume variation was also determined. BS-V was recorded before and after planning CT or CBCT. The percentage differences between the readings using the two imaging modalities, standard deviations and 95% confidence intervals were determined. Data were analysed for the whole patient cohort and separately for the older BladderScan™ BVI3000 and newer BVI9400 model. Interfraction bladder volume variation was determined from the percentage difference between the CT-V and CBCT-V. Treatment duration, incorporating the time needed for BS and CBCT, was recorded. Fourteen patients were enrolled, producing 133 data sets for analysis. BS-V was taken using the BVI9400 in four patients (43 data sets). The mean BS-V was 253.2 mL, and the mean CT-V or CBCT-V was 199 cm³. The mean percentage difference between the two modalities was 19.7% (SD 42.2; 95% CI 12.4 to 26.9). The BVI9400 model produced more consistent readings, with a mean percentage difference of -6.2% (SD 27.8; 95% CI -14.7 to -2.4%). The mean percentage difference between CT-V and CBCT-V was 31.3% (range -48% to 199.4%). Treatment duration from time of first BS reading to CBCT was, on average, 12 min (range 6-27). The BS produces bladder volume readings of an average 19.7% difference from CT-V or CBCT-V and can potentially be used to screen for large interfraction bladder volume variations in radiotherapy to prostatic bed. The observed interfraction bladder volume variation suggests the need to improve bladder volume consistency. Incorporating the BS into practice is feasible. © 2014 The Royal Australian and New Zealand College of Radiologists.

  20. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Gary E.; Song, Joo Hyun; Lu, Wei

    2007-06-15

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. 
In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R{sup 2}=0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R{sup 2}=0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R{sup 2}=0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.« less

  1. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry.

    PubMed

    Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A

    2007-06-01

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. 
In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.
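The voxelwise expansion measure described above can be illustrated with a short numerical sketch. This is a minimal illustration, not the authors' implementation: for a registration transformation phi(x) = x + u(x), the local volume change is the determinant of the Jacobian of phi, and taking its logarithm scales expansion and compression symmetrically. A 2-D displacement field on an invented grid stands in for the 3-D CT case.

```python
import numpy as np

def log_jacobian_map(ux, uy):
    """Voxelwise log-determinant of the Jacobian of phi(x, y) = (x + ux, y + uy)."""
    dux_dy, dux_dx = np.gradient(ux)  # rows ~ y, columns ~ x
    duy_dy, duy_dx = np.gradient(uy)
    # Jacobian of phi is the identity plus the displacement gradient.
    det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    return np.log(det)

# Uniform 1% stretch along x: the log-Jacobian should equal log(1.01) everywhere.
n = 32
x = np.arange(n, dtype=float)
ux = np.tile(0.01 * x, (n, 1))  # ux = 0.01 * x on every row
uy = np.zeros((n, n))
lj = log_jacobian_map(ux, uy)
```

Averaging `lj` over a cross section would give the scalar expansion estimate that the study compares against the spirometry flow rate.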

  2. To what extent do long-duration high-volume dam releases influence river-aquifer interactions? A case study in New South Wales, Australia

    NASA Astrophysics Data System (ADS)

    Graham, P. W.; Andersen, M. S.; McCabe, M. F.; Ajami, H.; Baker, A.; Acworth, I.

    2015-03-01

    Long-duration high-volume dam releases are unique anthropogenic events with no naturally occurring equivalents. The impact from such dam releases on a downstream Quaternary alluvial aquifer in New South Wales, Australia, is assessed. It is observed that long-duration (>26 days), high-volume dam releases (>8,000 ML/day average) result in significant variations in river-aquifer interactions. These variations include a flux from the river to the aquifer up to 6.3 m3/day per metre of bank (at distances of up to 330 m from the river bank), increased extent and volume of recharge/bank storage, and a long-term (>100 days) reversal of river-aquifer fluxes. In contrast, during lower-volume events (<2,000 ML/day average) the flux was directed from the aquifer to the river at rates of up to 1.6 m3/day per metre of bank. A groundwater-head prediction model was constructed and river-aquifer fluxes were calculated; however, predicted fluxes from this method showed poor correlation to fluxes calculated using actual groundwater heads. Long-duration high-volume dam releases have the potential to skew estimates of long-term aquifer resources and detrimentally alter the chemical and physical properties of phreatic aquifers flanking the river. The findings have ramifications for improved integrated management of dam systems and downstream aquifers.

  3. No evidence of a threshold in traffic volume affecting road-kill mortality at a large spatio-temporal scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grilo, Clara (Centro Brasileiro de Estudos em Ecologia de Estradas, Departamento de Biologia, Universidade Federal de Lavras, Campus Universitário, 37200-000 Lavras, Minas Gerais); Ferreira, Flavio Zanchetta

    Previous studies have found that the relationship between wildlife road mortality and traffic volume follows a threshold effect on low traffic volume roads. We aimed to evaluate the response of several species to increasing traffic intensity on highways over a large geographic area and temporal period. We used data on four terrestrial vertebrate species with different biological and ecological features, known for their high road-kill rates: the barn owl (Tyto alba), hedgehog (Erinaceus europaeus), red fox (Vulpes vulpes) and European rabbit (Oryctolagus cuniculus). Additionally, we checked whether road-kill likelihood varies when traffic patterns depart from the average. We used annual average daily traffic (AADT) and road-kill records observed along 1000 km of highways in Portugal over seven consecutive years (2003–2009). We fitted candidate models using Generalized Linear Models with a binomial distribution, with 1 km segments as the sample unit, to describe the effect of traffic on the probability of finding at least one victim in each segment during the study. We also assigned to each road-kill record the traffic count of that day and the AADT of that year, and tested for differences using Paired Student's t-test. Mortality risk declined significantly with traffic volume but varied among species: road-killed red foxes and rabbits were found up to moderate traffic volumes (< 20,000 AADT), whereas barn owls and hedgehogs occurred up to higher traffic volumes (40,000 AADT). Perception of risk may explain differences in responses towards high-traffic highway segments. Road-kill rates did not vary significantly when traffic intensity departed from the average. In summary, we did not find evidence of traffic thresholds for the analysed species and traffic intensities. 
We suggest that mitigation measures to reduce mortality be applied in particular on low-traffic roads (< 5000 AADT), while additional measures to reduce barrier effects should take into account species-specific behavioural traits. - Highlights: • Traffic and road-kills were analysed along 1000 km of highways over seven years. • Mortality risk declined significantly with traffic volume. • Perception of risk may explain different responses towards high traffic sections. • Reducing barrier effects should take into account species behavioural traits.
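The comparison of day-of-kill traffic against that year's AADT is a Paired Student's t-test, which can be sketched in pure Python. This is a minimal illustration with invented counts, not the authors' analysis:

```python
import math

def paired_t(day_traffic, aadt):
    """Return the paired-samples t statistic (df = n - 1)."""
    diffs = [d - a for d, a in zip(day_traffic, aadt)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical counts: traffic on each kill day vs. that year's AADT.
day = [18200, 19500, 21000, 17800, 20400, 19900]
aadt = [18000, 19800, 20500, 18200, 20100, 19600]
t = paired_t(day, aadt)
```

For these invented data |t| falls well below the df = 5 critical value (2.571), mirroring the finding that road-kill rates did not vary when traffic departed from the average.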

  4. Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida

    USGS Publications Warehouse

    Turner, J.F.

    1979-01-01

    A modified version of the Georgia Tech Watershed Model was applied to simulate flows in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973, and 1974 water years; flood hydrographs (maximum daily discharge and flood volume); and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using the average absolute error in annual runoff and daily flows and the correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibration range from 0.91 to 0.98, and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74, and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)
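The two calibration statistics quoted above can be sketched directly. This is a hedged illustration with invented flows, not the study data: the correlation coefficient between simulated and observed discharges, and the standard error of estimate expressed as a percentage of the observed mean.

```python
import math

def correlation(sim, obs):
    """Pearson correlation between simulated and observed series."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
    ss = sum((s - ms) ** 2 for s in sim) * sum((o - mo) ** 2 for o in obs)
    return cov / math.sqrt(ss)

def se_of_estimate_pct(sim, obs):
    """RMS simulation error as a percentage of the observed mean."""
    n = len(obs)
    se = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / n)
    return 100.0 * se / (sum(obs) / n)

obs = [120.0, 340.0, 95.0, 410.0, 230.0]   # hypothetical observed peak flows
sim = [130.0, 310.0, 110.0, 380.0, 250.0]  # hypothetical simulated peak flows
r = correlation(sim, obs)
sepct = se_of_estimate_pct(sim, obs)
```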

  5. Morphologic changes of the nasal cavity induced by rapid maxillary expansion: a study on 3-dimensional computed tomography models.

    PubMed

    Haralambidis, Adam; Ari-Demirkaya, Arzu; Acar, Ahu; Küçükkeleş, Nazan; Ateş, Mustafa; Ozkaya, Selin

    2009-12-01

    The aim of this study was to evaluate the effect of rapid maxillary expansion on the volume of the nasal cavity by using computed tomography. The sample consisted of 24 patients (10 boys, 14 girls) in the permanent dentition who had maxillary constriction and bilateral posterior crossbite. Ten patients had skeletal Class I and 14 had Class II relationships. Skeletal maturity was assessed with the modified cervical vertebral maturation method. Computed tomograms were taken before expansion and at the end of the 3-month retention period after active expansion. The tomograms were analyzed with Mimics software (version 10.11, Materialise Medical Co, Leuven, Belgium) to reconstruct 3-dimensional images and calculate the volume of the nasal cavities before and after expansion. A significant (P < .001) average increase of 11.3% in nasal volume was found. Sex, growth, and skeletal relationship did not influence measurements or response to treatment. A significant difference was found in the volume increase between the Class I and Class II patients, but it was attributed to the longer expansion period of the latter. Therefore, rapid maxillary expansion induces a significant average increase in nasal volume and consequently can increase nasal permeability and establish a predominantly nasal respiration pattern.

  6. Gas hydrate volume estimations on the South Shetland continental margin, Antarctic Peninsula

    USGS Publications Warehouse

    Jin, Y.K.; Lee, M.W.; Kim, Y.; Nam, S.H.; Kim, K.J.

    2003-01-01

    Multi-channel seismic data acquired on the South Shetland margin, northern Antarctic Peninsula, show that Bottom Simulating Reflectors (BSRs) are widespread in the area, implying large volumes of gas hydrates. In order to estimate the volume of gas hydrate in the area, interval velocities were determined using a 1-D velocity inversion method and porosities were deduced from their relationship with sub-bottom depth for terrigenous sediments. Because data such as well logs are not available, we made two baseline models for the velocities and porosities of non-gas hydrate-bearing sediments in the area, considering the velocity jump observed at shallow sub-bottom depth due to the joint contributions of gas hydrate and a shallow unconformity. The difference between the results of the two models is not significant. The parameters used to estimate the total volume of gas hydrate in the study area were 145 km of total length of BSRs identified on seismic profiles, 350 m thickness and 15 km width of gas hydrate-bearing sediments, and 6.3% average volume concentration of gas hydrate (based on the second baseline model). Assuming that gas hydrates exist only where BSRs are observed, the total volume of gas hydrates along the seismic profiles in the area is about 4.8 × 10^10 m3 (7.7 × 10^12 m3 of methane at standard temperature and pressure).
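The quoted total can be checked directly by multiplying the stated parameters; this short sketch reproduces the paper's arithmetic and nothing more:

```python
# Stated parameters from the abstract above.
length_m = 145e3       # 145 km total length of BSRs on seismic profiles
width_m = 15e3         # 15 km width of gas hydrate-bearing sediments
thickness_m = 350.0    # 350 m thickness
concentration = 0.063  # 6.3% average volume concentration of hydrate

# Total hydrate volume: length x width x thickness x concentration (~4.8e10 m3).
hydrate_m3 = length_m * width_m * thickness_m * concentration
```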

  7. An in vitro comparison of tracheostomy tube cuffs

    PubMed Central

    Maguire, Seamus; Haury, Frances; Jew, Korinne

    2015-01-01

    Introduction: The Shiley™ Flexible adult tracheostomy tube with TaperGuard™ cuff has been designed through its geometry, materials, diameter, and wall thickness to minimize micro-aspiration of fluids past the cuff and to provide an effective air seal in the trachea while also minimizing the risk of excessive contact pressure on the tracheal mucosa. The cuff also has a deflated profile that may allow for easier insertion through the stoma site. This unique design is known as the TaperGuard™ cuff. The purpose of the observational, in vitro study reported here was to compare the TaperGuard™ taper-shaped cuff to a conventional high-volume low-pressure cylindrical-shaped cuff (Shiley™ Disposable Inner Cannula Tracheostomy Tube [DCT]) with respect to applied tracheal wall pressure, air and fluid sealing efficacy, and insertion force. Methods: Three sizes of tracheostomy tubes with the two cuff types were placed in appropriately sized tracheal models and lateral wall pressure was measured via pressure-sensing elements on the inner surface. Fluid sealing performance was assessed by inflating the cuffs within the tracheal models (25 cmH2O), instilling water above the cuff, and measuring fluid leakage past the cuff. To measure air leak, tubes were attached to a test lung and ventilator, and leak was calculated by subtracting the average exhaled tidal volume from the average delivered tidal volume. A tensile test machine was used to measure insertion force for each tube with the cuff deflated to simulate clinical insertion through a stoma site. Results: The average pressure exerted on the lateral wall of the model trachea was lower for the taper-shaped cuff than for the cylindrical cuff under all test conditions (P<0.05). The taper-shaped cuff also demonstrated a more even, lower pressure distribution along the lateral wall of the model trachea. 
The average air and fluid seal performance with the taper-shaped cuff was significantly improved, when compared to the cylindrical-shaped cuff, for each tube size tested (P<0.05). The insertion force for the taper-shaped cuff was ~40% less than that for the cylindrical-shaped cuff. Conclusion: In a model trachea, the Shiley™ Flexible Adult tracheostomy tube with TaperGuard™ cuff, when compared to the Shiley™ Disposable Inner Cannula Tracheostomy tube with cylindrical cuff, exerted a lower average lateral wall pressure and a more evenly distributed pressure. In addition, it provided more effective fluid and air seals and required less force to insert. PMID:25960679
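The air-leak calculation in the methods above is a simple difference of averages; a minimal sketch with hypothetical tidal volumes (mL), not measured data:

```python
# Leak = average delivered tidal volume minus average exhaled tidal volume.
delivered = [500.0, 498.0, 502.0, 501.0]  # ventilator-delivered tidal volumes, mL
exhaled = [472.0, 470.0, 476.0, 474.0]    # exhaled tidal volumes past the cuff, mL

air_leak_ml = sum(delivered) / len(delivered) - sum(exhaled) / len(exhaled)
```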

  8. Assessing the Use of Sunken Lanes for Water Retention in a Landscape

    NASA Astrophysics Data System (ADS)

    Zlatuška, Karel

    2012-12-01

    Newly designed structures and landscaping elements are often used for flood protection. This article assesses the use of existing sunken lanes for retaining water in a landscape and for the sedimentation of washed-off soil. The article also describes ways to preserve, or at least minimally disrupt, existing biotopes and landscape segments. Geodetic data from one specific sunken lane in South Moravia in the Czech Republic were transferred to a digital terrain model; 9 models were subsequently generated, each with a different longitudinal sunken lane bed slope. Retention dams consisting of gabions were placed in them. The number of dams, the volume of structures made of steel gabions, and the retention volume behind the dams were determined for each model. It was determined that the number of dams, as well as their total volume, increased with the average longitudinal slope of the sunken lane bed. It was also discovered that the retention volume remained almost the same, decreasing only very slightly with increasing longitudinal slope.

  9. A computer simulation of free-volume distributions and related structural properties in a model lipid bilayer.

    PubMed Central

    Xiang, T X

    1993-01-01

    A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 × 100 chain molecules, with each chain molecule having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å2/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å-3, respectively, and increase to 0.049 and 0.0067 Å-3 at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior, with an average value of 3.4 Å3, while the second characteristic cavity size V2 varies more dramatically, from a plateau value of 12.9 Å3 in the highly ordered chain region to 9.0 Å3 in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of, and does not overlap with any of, the chain segments in the bilayer. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions have comparable contributions to local disorder and couple strongly with each other, and that the occurrence of kink defects has higher probability than predicted from an independent-transition model. PMID:8241390

  10. The Partial Molar Volume and Compressibility of the FeO Component in Model Basalts (Mixed CaAl2Si2O8-CaMgSi2O6-CaFeSi2O6 Liquids) at 0 GPa: evidence of Fe2+ in 6-fold coordination

    NASA Astrophysics Data System (ADS)

    Guo, X.; Lange, R. A.; Ai, Y.

    2010-12-01

    FeO is an important component in magmatic liquids, and yet its partial molar volume at one bar is not as well known as that for Fe2O3 because of the difficulty of performing double-bob density measurements under reducing conditions. Moreover, there is growing evidence from spectroscopic studies that Fe2+ occurs in 4-, 5-, and 6-fold coordination in silicate melts, and it is expected that the partial molar volume and compressibility of the FeO component will vary accordingly. We have conducted both density and relaxed sound speed measurements on four liquids in the An-Di-Hd (CaAl2Si2O8-CaMgSi2O6-CaFeSi2O6) system: (1) Di-Hd (50:50), (2) An-Hd (50:50), (3) An-Di-Hd (33:33:33) and (4) Hd (100). Densities were measured between 1573 and 1838 K at one bar with the double-bob Archimedean method, using molybdenum bobs and crucibles in a reducing gas (1%CO-99%Ar) environment. The sound speeds were measured under similar conditions with a frequency-sweep acoustic interferometer and used to calculate isothermal compressibility. All the density data for the three multi-component (model basalt) liquids were combined with density data on SiO2-Al2O3-CaO-MgO-K2O-Na2O liquids (Lange, 1997) in a fit to a linear volume equation; the results lead to a partial molar volume (±1σ) for FeO of 11.7 ± 0.3 cm3/mol at 1723 K. This value is similar to that for crystalline FeO at 298 K (halite structure; 12.06 cm3/mol), which suggests an average Fe2+ coordination of ~6 in these model basalt compositions. In contrast, the fitted partial molar volume of FeO in pure hedenbergite liquid is 14.6 ± 0.3 cm3/mol at 1723 K, which is consistent with an average Fe2+ coordination of 4.3 derived from EXAFS spectroscopy (Rossano, 2000). 
Similarly, all the compressibility data for the three multi-component liquids were combined with compressibility data on SiO2-Al2O3-CaO-MgO liquids (Ai and Lange, 2008) in a fit to an ideal mixing model for melt compressibility; the results lead to a partial molar compressibility (±1σ) for FeO of 2.4 (±0.3) × 10^-2 GPa^-1 at 1723 K. In contrast, the compressibility of FeO in pure hedenbergite liquid is more than twice as large: 6.3 (±0.2) × 10^-2 GPa^-1. When these results are combined with density and sound speed data on CaO-FeO-SiO2 liquids at one bar (Guo et al., 2009), a systematic and linear variation between the partial molar volume and compressibility of the FeO component is obtained, which appears to track changes in the average Fe2+ coordination in these liquids. Therefore, the three most important conclusions of this study are: (1) ideal mixing of volume and compressibility does not occur for all FeO-bearing magmatic liquids, owing to changes in Fe2+ coordination; (2) the partial molar volume and compressibility of FeO vary linearly and systematically with Fe2+ coordination; and (3) ideal mixing of volume and compressibility does occur among the three mixed An-Di-Hd liquids, presumably because of a common average Fe2+ coordination of ~6.
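The "fit to a linear volume equation" above means regressing measured molar volumes of mixed liquids on component mole fractions, V = Σ X_i·V̄_i, so the fitted coefficients are the partial molar volumes. A hedged sketch with invented compositions and volumes (not the study's data set):

```python
import numpy as np

# Rows: liquids; columns: mole fractions of three illustrative components.
X = np.array([
    [0.50, 0.30, 0.20],
    [0.45, 0.25, 0.30],
    [0.55, 0.35, 0.10],
    [0.40, 0.25, 0.35],
])
# Partial molar volumes (cm3/mol); the last entry uses the fitted FeO value 11.7.
true_vbar = np.array([26.9, 16.5, 11.7])
V = X @ true_vbar  # synthetic "measured" molar volumes, V = sum_i X_i * Vbar_i

# Least-squares fit recovers the partial molar volumes from the mixed liquids.
vbar, *_ = np.linalg.lstsq(X, V, rcond=None)
```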

  11. Creatinine generation from kinetic modeling with or without postdialysis serum creatinine measurement: results from the HEMO study.

    PubMed

    Daugirdas, John T; Depner, Thomas A

    2017-11-01

    A convenient method to estimate the creatinine generation rate and measures of creatinine clearance in hemodialysis patients using formal kinetic modeling and standard pre- and postdialysis blood samples has not been described. We used data from 366 dialysis sessions characterized during follow-up month 4 of the HEMO study, during which cross-dialyzer clearances for both urea and creatinine were available. Blood samples taken at 1 h into dialysis and 30 min and 60 min after dialysis were used to determine how well a two-pool kinetic model could predict creatinine concentrations and other kinetic parameters, including the creatinine generation rate. An extrarenal creatinine clearance of 0.038 L/kg/24 h was included in the model. Diffusive cross-dialyzer clearances of urea [230 (SD 37) mL/min] correlated well (R2 = 0.78) with creatinine clearances [164 (SD 30) mL/min]. When the effective diffusion volume flow rate was set at 0.791 times the blood flow rate for the cross-dialyzer clearance measurements at 1 h into dialysis, the mean calculated volume of creatinine distribution averaged 29.6 (SD 7.2) L, compared with 31.6 (SD 7.0) L for urea (P < 0.01). The modeled creatinine generation rate [1183 (SD 463) mg/day] averaged 100.1% (SD 29; median 99.3) of that predicted in nondialysis patients by an anthropometric equation. A simplified method for modeling the creatinine generation rate using the urea distribution volume and urea dialyzer clearance, without use of the postdialysis serum creatinine measurement, gave results [1187 (SD 475) mg/day] that closely matched the formally modeled value (R2 = 0.971). Our analysis confirms previous findings of similar distribution volumes for creatinine and urea. After taking extrarenal clearance into consideration, the creatinine generation rate in dialysis patients is similar to that in nondialysis patients. 
A simplified method based on urea clearance and urea distribution volume, not requiring a postdialysis serum creatinine measurement, can be used to yield creatinine generation rates that closely match those determined from standard modeling.

  12. Accurate Measurement of Small Airways on Low-Dose Thoracic CT Scans in Smokers

    PubMed Central

    Conradi, Susan H.; Atkinson, Jeffrey J.; Zheng, Jie; Schechtman, Kenneth B.; Senior, Robert M.; Gierada, David S.

    2013-01-01

    Background: Partial volume averaging and tilt relative to the scan plane on transverse images limit the accuracy of airway wall thickness measurements on CT scan, confounding assessment of the relationship between airway remodeling and clinical status in COPD. The purpose of this study was to assess the effect of partial volume averaging and tilt corrections on airway wall thickness measurement accuracy and on relationships between airway wall thickening and clinical status in COPD. Methods: Airway wall thickness measurements in 80 heavy smokers were obtained on transverse images from low-dose CT scan using the open-source program Airway Inspector. Measurements were corrected for partial volume averaging and tilt effects using an attenuation- and geometry-based algorithm and compared with functional status. Results: The algorithm reduced wall thickness measurements of smaller airways to a greater degree than larger airways, increasing the overall range. When restricted to analyses of airways with an inner diameter < 3.0 mm, for a theoretical airway of 2.0 mm inner diameter, the wall thickness decreased from 1.07 ± 0.07 to 0.29 ± 0.10 mm, and the square root of the wall area decreased from 3.34 ± 0.15 to 1.58 ± 0.29 mm, comparable to histologic measurement studies. Corrected measurements had higher correlation with FEV1, differed more between BMI, airflow obstruction, dyspnea, and exercise capacity (BODE) index scores, and explained a greater proportion of FEV1 variability in multivariate models. Conclusions: Correcting for partial volume averaging improves accuracy of airway wall thickness estimation, allowing direct measurement of the small airways to better define their role in COPD. PMID:23172175

  13. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Jinghao; Kim, Sung; Jabbour, Salma

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. 
The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes, and the average time of the local transformation was from 20 to 34 s on two deformable superquadrics mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide a foundation to achieve real-time adaptive radiotherapy.
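The volume overlap ratio used above to score a segmentation against the delineated benchmark can be sketched on binary masks. This shows one common definition (intersection volume divided by benchmark volume); the paper's exact formula may differ, and the toy volumes below are invented:

```python
import numpy as np

def overlap_ratio(seg, ref):
    """Fraction of the benchmark volume covered by the segmentation."""
    inter = np.logical_and(seg, ref).sum()
    return inter / ref.sum()

ref = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True   # benchmark target: 6 x 6 x 6 = 216 voxels
seg = np.zeros_like(ref)
seg[3:8, 2:8, 2:8] = True   # segmentation that misses one 6 x 6 slab
ratio = overlap_ratio(seg, ref)  # 180 / 216
```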

  14. Stone Attenuation Values Measured by Average Hounsfield Units and Stone Volume as Predictors of Total Laser Energy Required During Ureteroscopic Lithotripsy Using Holmium:Yttrium-Aluminum-Garnet Lasers.

    PubMed

    Ofude, Mitsuo; Shima, Takashi; Yotsuyanagi, Satoshi; Ikeda, Daisuke

    2017-04-01

    To evaluate the predictors of the total laser energy (TLE) required during ureteroscopic lithotripsy (URS) using the holmium:yttrium-aluminum-garnet (Ho:YAG) laser for a single ureteral stone. We retrospectively analyzed the data of 93 URS procedures performed for a single ureteral stone in our institution from November 2011 to September 2015. We evaluated the association between TLE and preoperative clinical data, such as age, sex, body mass index, and noncontrast computed tomographic findings, including stone laterality, location, maximum diameter, volume, stone attenuation values measured using average Hounsfield units (HUs), and presence of secondary signs (severe hydronephrosis, tissue rim sign, and perinephric stranding). The mean maximum stone diameter, volume, and average HUs were 9.2 ± 3.8 mm, 283.2 ± 341.4 mm3, and 863 ± 297, respectively. The mean TLE and operative time were 2.93 ± 3.27 kJ and 59.1 ± 28.1 minutes, respectively. Maximum stone diameter, volume, average HUs, severe hydronephrosis, and tissue rim sign were significantly correlated with TLE (Spearman's rho analysis). Stepwise multiple linear regression analysis defining stone volume, average HUs, severe hydronephrosis, and tissue rim sign as explanatory variables showed that stone volume and average HUs were significant predictors of TLE (standardized coefficients of 0.565 and 0.320, respectively; adjusted R2 = 0.55, F = 54.7, P < .001). Stone attenuation values measured by average HUs and stone volume were strong predictors of TLE during URS using Ho:YAG laser procedures.
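The standardized coefficients reported above come from regressing TLE on z-scored predictors, so the slopes are directly comparable across variables. A hedged sketch on synthetic data with invented effect sizes (not the study's records):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 93
volume = rng.normal(283.0, 340.0, n)  # stone volume, mm3, loosely matching the abstract
hu = rng.normal(863.0, 297.0, n)      # average Hounsfield units
# Invented linear relationship plus noise; coefficients are illustrative only.
tle = 0.005 * volume + 0.004 * hu + rng.normal(0.0, 0.5, n)  # kJ

def zscore(x):
    return (x - x.mean()) / x.std()

# Z-scoring predictors and response makes the fitted slopes standardized betas.
Z = np.column_stack([zscore(volume), zscore(hu)])
beta, *_ = np.linalg.lstsq(Z, zscore(tle), rcond=None)
```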

  15. Temporal trends and volume-outcome associations after traumatic brain injury: a 12-year study in Taiwan.

    PubMed

    Shi, Hon-Yi; Hwang, Shiuh-Lin; Lee, King-Teh; Lin, Chih-Lung

    2013-04-01

    The purpose of this study was to evaluate temporal trends in traumatic brain injury (TBI); the impact of hospital volume and surgeon volume on length of stay (LOS), hospitalization cost, and in-hospital mortality rate; and to explore predictors of these outcomes in a nationwide population in Taiwan. This population-based patient cohort study retrospectively analyzed 16,956 patients who had received surgical treatment for TBI between 1998 and 2009. Bootstrap estimation was used to derive 95% confidence intervals for differences in effect sizes. Hierarchical linear regression models were used to predict outcomes. Patients treated in very-high-volume hospitals were more responsive than those treated in low-volume hospitals in terms of LOS (-0.11; 95% CI -0.20 to -0.03) and hospitalization cost (-0.28; 95% CI -0.49 to -0.06). Patients treated by high-volume surgeons were also more responsive than those treated by low-volume surgeons in terms of LOS (-0.19; 95% CI -0.37 to -0.01) and hospitalization cost (-0.43; 95% CI -0.81 to -0.05). The mean LOS was 24.3 days and the average LOS for very-high-volume hospitals and surgeons was 61% and 64% shorter, respectively, than that for low-volume hospitals and surgeons. The mean hospitalization cost was US $7,292.10, and the average hospitalization cost for very-high-volume hospitals and surgeons was 19% and 22% lower, respectively, than that for low-volume hospitals and surgeons. Advanced age, male sex, high Charlson Comorbidity Index score, treatment in a low-volume hospital, and treatment by a low-volume surgeon were significantly associated with adverse outcomes (p < 0.001). The data suggest that annual surgical volume is the key factor in surgical outcomes in patients with TBI. The results improve the understanding of medical resource allocation for this surgical procedure, and can help to formulate public health policies for optimizing hospital resource utilization for related diseases.
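The bootstrap confidence intervals for effect-size differences used above can be sketched with a simple percentile bootstrap. The data are synthetic and the study itself fit hierarchical models on 16,956 patients; this only illustrates the CI construction.

```python
import numpy as np

def bootstrap_ci_diff(a, b, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for mean(a) - mean(b)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each group with replacement and record the mean difference.
        diffs[i] = (rng.choice(a, a.size).mean()
                    - rng.choice(b, b.size).mean())
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Synthetic length-of-stay samples (days): shorter stays at high volume.
rng = np.random.default_rng(0)
los_high_volume = rng.gamma(2.0, 8.0, 400)
los_low_volume = rng.gamma(2.0, 12.0, 400)
lo, hi = bootstrap_ci_diff(los_high_volume, los_low_volume)
print(lo, hi)  # the whole interval lies below zero
```

An interval excluding zero, as in the abstract's reported CIs, is what supports the claim that high-volume providers differ from low-volume ones.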

  16. Chesapeake Bay Hypoxic Volume Forecasts and Results

    USGS Publications Warehouse

    Evans, Mary Anne; Scavia, Donald

    2013-01-01

    Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxia volume forecast is 6.1 km3, slightly smaller than average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.

  17. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  18. Diffusion of multiple species with excluded-volume effects.

    PubMed

    Bruna, Maria; Chapman, S Jonathan

    2012-11-28

Stochastic models of diffusion with excluded-volume effects are used to model many biological and physical systems at a discrete level. The average properties of the population may be described by a continuum model based on partial differential equations. In this paper we consider multiple interacting subpopulations/species and study how the inter-species competition emerges at the population level. Each individual is described as a finite-size hard core interacting particle undergoing Brownian motion. The link between the discrete stochastic equations of motion and the continuum model is considered systematically using the method of matched asymptotic expansions. The system for two species leads to a nonlinear cross-diffusion system for each subpopulation, which captures the enhancement of the effective diffusion rate due to excluded-volume interactions between particles of the same species, and the diminishment due to particles of the other species. This model can explain two alternative notions of the diffusion coefficient that are often confounded, namely collective diffusion and self-diffusion. Simulations of the discrete system show good agreement with the analytic results.
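The qualitative behavior of such a cross-diffusion system can be illustrated with a toy 1D explicit finite-difference scheme in which each species' effective diffusivity is enhanced by its own density and reduced by the other's. The coefficients (0.5, 0.3) and the simplified form are illustrative assumptions, not the matched-asymptotics coefficients derived in the paper.

```python
import numpy as np

def step(r1, r2, D1=1.0, D2=1.0, dx=0.1, dt=0.001):
    """One explicit time step of a toy cross-diffusion model (periodic BCs)."""
    def lap(u):
        return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    # Self-enhancement (+0.5 * own density), cross-diminishment (-0.3 * other).
    d1 = D1 * (1 + 0.5 * r1 - 0.3 * r2)
    d2 = D2 * (1 + 0.5 * r2 - 0.3 * r1)
    return r1 + dt * d1 * lap(r1), r2 + dt * d2 * lap(r2)

x = np.linspace(0, 10, 100)
r1 = np.exp(-(x - 3) ** 2)   # initial blob of species 1
r2 = np.exp(-(x - 7) ** 2)   # initial blob of species 2
for _ in range(1000):
    r1, r2 = step(r1, r2)
print(r1.max(), r2.max())  # both profiles spread and flatten
```

The time step satisfies the explicit stability bound (dt · d / dx² < 0.5 for the largest local diffusivity), so the profiles stay non-negative while spreading.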

  19. Comparison of Integrated Radiation Transport Models with TEPC Measurements for the Average Quality Factors in Spaceflights

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Dicello, John F.; Pisacane, Vincent; Cucinotta, Francis A.

    2007-01-01

The purpose of this work is to test our theoretical model for the interpretation of radiation data measured in space. During space missions, astronauts are exposed to a complex field of radiation types and kinetic energies from galactic cosmic rays (GCR), trapped protons, and sometimes solar particle events (SPEs). The tissue equivalent proportional counter (TEPC) is a simple time-dependent approach for radiation monitoring for astronauts on board the International Space Station. A newer approach to microdosimetry is the use of silicon-on-insulator (SOI) technology launched on the MidSTAR-1 mission in low Earth orbit (LEO). In radiation protection practice, the average quality factor of a radiation field is defined as a function of linear energy transfer (LET), Q(sub ave)(LET). However, a TEPC measures the average quality factor as a function of the lineal energy y, Q(sub ave)(y), defined as the average energy deposition in a volume divided by the average chord length of the volume. Lineal energy, y, deviates from LET due to energy straggling, delta-ray escape or entry, and nuclear fragments produced in the detector volume. Monte Carlo track structure simulation was employed to obtain the response of a TEPC irradiated with charged particles for a wall-less counter with an equivalent site diameter of 1 micron. The calculated energy-absorption data for the wall-less counter were compiled for various y values for several ion types at various discrete projectile energy levels.
For the simulation of the TEPC response to the mixed radiation environments inside a spacecraft, such as the Space Shuttle and the International Space Station, the complete microdosimetric TEPC response, f(y, E, Z), was calculated from the Monte Carlo theoretical results by using first-order Lagrangian interpolation for a monovariate function at a given y value (y = 0.1 keV/micron to 5000 keV/micron) at any projectile energy level (E = 0.01 MeV/u to 50,000 MeV/u) for each radiation type (Z = 1 to 28). Because an anomalous response has been observed experimentally at large event sizes, due to the escape of energy out of the sensitive volume by delta-rays and the entry of delta-rays from the high-density wall into the low-density gas-volume cavity, a Monte Carlo simulation was also made for the response of a walled TEPC with a wall thickness of 2 mm and a density of 1 g/cm(exp 3). The radius of the cavity was set to 6.35 mm and the gas density to 7.874 x 10(exp -5) g/cm(exp 3). The responses of the walled and wall-less counters were compared. The average quality factor Q(sub ave)(y) for trapped protons on STS-89 demonstrated good agreement between the model calculations and the flight TEPC data. Using an integrated space radiation model (which includes the transport codes HZETRN and BRYNTRN and the quantum nuclear interaction model QMSFRG) and the resultant response distribution functions of the walled TEPC from Monte Carlo track simulations, we compared model calculations with walled-TEPC measurements from NASA missions in LEO and made predictions for the lunar and Mars missions. The Q(sub ave)(y) values for trapped or solar protons ranged from 1.9-2.5. This overestimates the Q(sub ave)(LET) values, which ranged from 1.4-1.6. Both quantities increase with shield thickness due to nuclear fragmentation. The Q(sub ave)(LET) for the complete GCR spectra was found to be 3.5-4.5, while flight TEPCs measured 2.9-3.4 for Q(sub ave)(y). 
The GCR values decrease with shield thickness. Our analysis supports the use of TEPCs for monitoring the space radiation environment when computational analysis is used for proper data interpretation.
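The dose-averaged quality factor Q(sub ave)(y) described above can be sketched as a dose-weighted average over a lineal-energy spectrum. The Q(L) relation below is the standard ICRP-60 piecewise formula; treating y directly as a surrogate for L is the TEPC approximation whose limitations the abstract discusses, and the spectrum is a hypothetical example.

```python
import numpy as np

def q_icrp60(L):
    """ICRP-60 quality factor Q(L), with L in keV/micron."""
    L = np.asarray(L, dtype=float)
    return np.where(L < 10, 1.0,
           np.where(L <= 100, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

def q_ave_from_spectrum(y, f):
    """Dose-averaged quality factor from a lineal-energy spectrum.

    Each bin is weighted by the dose it deposits, d(y) proportional to
    y*f(y): Q_ave = sum(Q(y) * y * f(y)) / sum(y * f(y)).
    """
    d = y * f
    return float(np.sum(q_icrp60(y) * d) / np.sum(d))

# Hypothetical proton-like frequency spectrum peaked near a few keV/micron.
y = np.logspace(-1, 3, 400)                    # 0.1 to 1000 keV/micron
f = np.exp(-(np.log(y) - np.log(3.0)) ** 2)
qa = q_ave_from_spectrum(y, f)
print(qa)
```

Because most of the dose here lies below 10 keV/micron (where Q = 1), the average lands modestly above 1, in line with the proton-field values of order 1.4-2.5 quoted in the abstract.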

  20. The Short-Term Effect of Weight Loss Surgery on Volumetric Breast Density and Fibroglandular Volume.

    PubMed

    Vohra, Nasreen A; Kachare, Swapnil D; Vos, Paul; Schroeder, Bruce F; Schuth, Olga; Suttle, Dylan; Fitzgerald, Timothy L; Wong, Jan H; Verbanac, Kathryn M

    2017-04-01

Obesity and breast density are both associated with an increased risk of breast cancer and are potentially modifiable. Weight loss surgery (WLS) causes a significant reduction in the amount of body fat and a decrease in breast cancer risk. The effect of WLS on breast density and its components has not been documented. Here, we analyze the impact of WLS on volumetric breast density (VBD) and on each of its components (fibroglandular volume and breast volume) by using three-dimensional methods. Fibroglandular volume, breast volume, and their ratio, the VBD, were calculated from mammograms before and after WLS by using Volpara™ automated software. For the 80 women included, average body mass index decreased from 46.0 ± 7.22 to 33.7 ± 7.06 kg/m². Mammograms were performed on average 11.6 ± 9.4 months before and 10.1 ± 7 months after WLS. There was a significant reduction in average breast volume (39.4% decrease) and average fibroglandular volume (15.5% decrease), and thus, the average VBD increased from 5.15 to 7.87% (p < 1 × 10⁻⁹) after WLS. When stratified by menopausal status and diabetic status, VBD increased significantly in all groups, but only perimenopausal and postmenopausal women and non-diabetics experienced a significant reduction in fibroglandular volume. Breast volume and fibroglandular volume decreased, and VBD increased following WLS, with the most significant change observed in postmenopausal women and non-diabetics. Further studies are warranted to determine how physical and biological alterations in breast density components after WLS may impact breast cancer risk.
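The direction of the result follows from arithmetic: if breast volume falls faster than fibroglandular volume, their ratio (VBD) must rise. A minimal sketch, using hypothetical volumes chosen only to match the reported percent changes (the study's 7.87% figure differs slightly because an average of per-patient ratios is not the ratio of averages):

```python
def volumetric_breast_density(fibroglandular_ml, breast_ml):
    """VBD (%) = fibroglandular volume / total breast volume * 100."""
    return 100.0 * fibroglandular_ml / breast_ml

before_breast, before_fg = 1000.0, 51.5            # hypothetical volumes, ml
after_breast = before_breast * (1 - 0.394)         # 39.4% breast-volume drop
after_fg = before_fg * (1 - 0.155)                 # 15.5% fibroglandular drop
vbd_before = volumetric_breast_density(before_fg, before_breast)
vbd_after = volumetric_breast_density(after_fg, after_breast)
print(vbd_before, vbd_after)  # VBD rises even though both volumes shrink
```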

  1. Modeling the Capacitive Deionization Process in Dual-Porosity Electrodes

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2016-04-28

In many areas of the world, there is a need to increase water availability. Capacitive deionization (CDI) is an electrochemical water treatment process that can be a viable alternative for treating water and for saving energy. A model is presented to simulate the CDI process in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. A one-equation model based on the principle of local equilibrium is derived. The constraints determining the range of application of the one-equation model are presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. The source terms that appear in the averaged equations are calculated using theoretical derivations. The global diffusivity is calculated by solving the closure problem.

  2. Using a traffic simulation model (VISSIM) with an emissions model (MOVES) to predict emissions from vehicles on a limited-access highway.

    PubMed

    Abou-Senna, Hatem; Radwan, Essam; Westerlund, Kurt; Cooper, C David

    2013-07-01

The Intergovernmental Panel on Climate Change (IPCC) estimates that baseline global GHG emissions may increase 25-90% from 2000 to 2030, with carbon dioxide (CO2) emissions growing 40-110% over the same period. On-road vehicles are a major source of CO2 emissions in all the developed countries, and in many of the developing countries in the world. Similarly, several criteria air pollutants are associated with transportation, for example, carbon monoxide (CO), nitrogen oxides (NO(x)), and particulate matter (PM). Therefore, the need to accurately quantify transportation-related emissions from vehicles is essential. The new U.S. Environmental Protection Agency (EPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to combine a microscopic traffic simulation model (such as VISSIM) with MOVES to obtain accurate results. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, FL. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NO(x), PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. 
Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
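The gap between the single-average approach and a second-by-second drive schedule can be sketched with a toy speed-dependent emission-rate curve. The curve below is an illustrative convex stand-in, not a MOVES lookup table; the point is that with a convex rate curve, plugging in one average speed understates emissions relative to summing per-second rates (Jensen's inequality).

```python
import numpy as np

def rate_g_per_s(speed_mph):
    """Hypothetical CO2 rate (g per vehicle-second): high at low speed
    and at high speed, minimal near 45 mph. Illustrative only."""
    return 2.0 + 0.004 * (speed_mph - 45.0) ** 2

# Toy stop-and-go drive schedule: speed each second for 10 minutes.
t = np.arange(600)
speed = 30 + 25 * np.sin(2 * np.pi * t / 120)   # oscillates 5-55 mph

per_second_total = rate_g_per_s(speed).sum()           # LDS-style estimate
avg_speed_total = rate_g_per_s(speed.mean()) * t.size  # single-average estimate
print(avg_speed_total, per_second_total)
```

Here the second-by-second total exceeds the single-average total because the oscillating schedule spends much of its time at speeds where the rate curve is steep, the kind of discrepancy the paper quantifies for AVG versus OPMODE.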

  3. Analytical simulation of SPS system performance, volume 3, phase 3

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.; Lindsey, W. C.

    1980-01-01

The simulation model for the Solar Power Satellite space antenna and the associated system imperfections are described. Overall power transfer efficiency, the key performance issue, is discussed as a function of the system imperfections. Other system performance measures discussed include average power pattern, mean beam gain reduction, and pointing error.

  4. A manpower calculus: the implications of SUO fellowship expansion on oncologic surgeon case volumes.

    PubMed

    See, William A

    2014-01-01

    Society of Urologic Oncology (SUO)-accredited fellowship programs have undergone substantial expansion. This study developed a mathematical model to estimate future changes in urologic oncologic surgeon (UOS) manpower and analyzed the effect of those changes on per-UOS case volumes. SUO fellowship program directors were queried as to the number of positions available on an annual basis. Current US UOS manpower was estimated from the SUO membership list. Future manpower was estimated on an annual basis by linear senescence of existing manpower combined with linear growth of newly trained surgeons. Case-volume estimates for the 4 surgical disease sites (prostate, kidney/renal pelvis, bladder, and testes) were obtained from the literature. The future number of major cases was determined from current volumes based upon the US population growth rates and the historic average annual change in disease incidence. Two models were used to predict future per-UOS major case volumes. Model 1 assumed the current distribution of cases between nononcologic surgeons and UOS would continue. Model 2 assumed a progressive redistribution of cases over time such that in 2043 100% of major urologic cancer cases would be performed by UOSs. Over the 30-year period to "manpower steady-state" SUO-accredited UOSs practicing in the United States have the potential to increase from approximately 600 currently to 1,650 in 2043. During this interval, case volumes are predicted to change 0.97-, 2.4-, 1.1-, and 1.5-fold for prostatectomy, nephrectomy, cystectomy, and retroperitoneal lymph node dissection, respectively. The ratio of future to current total annual case volumes is predicted to be 0.47 and 0.9 for models 1 and 2, respectively. The number of annual US practicing graduates necessary to achieve a future to current case-volume ratio greater than 1 is 25 and 49 in models 1 and 2, respectively. 
The current number of SUO fellowship trainees has the potential to decrease future per-UOS case volumes relative to current levels. Redistribution of existing case volume or a decrease in the annual number of trainees or both would be required to ensure sufficient surgical volumes for skill maintenance and optimal patient outcomes. Published by Elsevier Inc.
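The manpower model described above (linear senescence of the existing cohort plus linear entry of new graduates) can be sketched directly. The graduates-per-year figure is a hypothetical assumption chosen so the 30-year steady state lands near the abstract's approximately 1,650 surgeons; the abstract does not state the annual intake.

```python
def project_uos(current=600, grads_per_year=55, career_years=30, years=30):
    """Project UOS workforce: the existing cohort retires linearly over a
    career_years span while grads_per_year new surgeons enter each year."""
    workforce = []
    for t in range(1, years + 1):
        retired = current * min(t / career_years, 1.0)   # linear senescence
        entered = grads_per_year * min(t, career_years)  # linear growth
        workforce.append(current - retired + entered)
    return workforce

w = project_uos()
print(w[-1])  # steady state: grads_per_year * career_years = 1650
```

At steady state the starting cohort has fully retired, so the workforce equals annual intake times career length, which is why the abstract's per-surgeon case volumes hinge on the number of trainees.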

  5. Scale dependence of entrainment-mixing mechanisms in cumulus clouds

    DOE PAGES

    Lu, Chunsong; Liu, Yangang; Niu, Shengjie; ...

    2014-12-17

This work empirically examines the dependence of entrainment-mixing mechanisms on the averaging scale in cumulus clouds using in situ aircraft observations during the Routine Atmospheric Radiation Measurement Aerial Facility Clouds with Low Optical Water Depths Optical Radiative Observations (RACORO) field campaign. A new measure of the homogeneous mixing degree is defined that can encompass all types of mixing mechanisms. Analysis of the dependence of the homogeneous mixing degree on the averaging scale shows that, on average, the homogeneous mixing degree decreases with increasing averaging scale, suggesting that apparent mixing mechanisms shift gradually from homogeneous mixing toward extreme inhomogeneous mixing with increasing scale. The scale dependence can be well quantified by an exponential function, providing a first attempt at developing a scale-dependent parameterization for the entrainment-mixing mechanism. The influences of three factors on the scale dependence are further examined: droplet-free filament properties (size and fraction), microphysical properties (mean volume radius and liquid water content of cloud droplet size distributions adjacent to droplet-free filaments), and relative humidity of entrained dry air. It is found that the decreasing rate of the homogeneous mixing degree with increasing averaging scale becomes larger with larger droplet-free filament size and fraction, larger mean volume radius and liquid water content, or higher relative humidity. The results underscore the necessity and possibility of considering the averaging scale in representations of entrainment-mixing processes in atmospheric models.
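Fitting an exponential scale dependence of the kind described above can be sketched by log-linear least squares. The decay form psi(s) = exp(-s/L) and the scale constant L = 200 m are illustrative assumptions, not the paper's fitted parameterization.

```python
import numpy as np

# Generate noisy homogeneous-mixing-degree data that decays exponentially
# with averaging scale s, then recover the scale constant L.
rng = np.random.default_rng(0)
s = np.linspace(10, 1000, 50)                  # averaging scales, m (assumed)
L_true = 200.0                                 # hypothetical scale constant
psi = np.exp(-s / L_true) * np.exp(rng.normal(0, 0.05, s.size))

# log(psi) = -s/L + noise, so a straight-line fit in log space gives L.
slope, intercept = np.polyfit(s, np.log(psi), 1)
L_fit = -1.0 / slope
print(L_fit)  # close to the true value of 200
```

Fitting in log space turns the exponential into a linear regression, which is the simplest way to calibrate such a scale-dependent parameterization from aircraft data.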

  6. 40 CFR Table 6 to Subpart Cccc of... - Emission Limitations for Energy Recovery Units That Commenced Construction After June 4, 2010, or...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

... parts per million dry volume Biomass—160 parts per million dry volume 30 day rolling average Carbon... concentration of 300 ppm or less for a biomass-fed boiler. Dioxins/furans (Total Mass Basis) No Total Mass Basis... Biomass—290 parts per million dry volume Coal—340 parts per million dry volume 3-run average (1 hour...

  7. 40 CFR Table 6 to Subpart Cccc of... - Emission Limitations for Energy Recovery Units That Commenced Construction After June 4, 2010, or...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

... parts per million dry volume Biomass—160 parts per million dry volume 30 day rolling average Carbon... concentration of 300 ppm or less for a biomass-fed boiler. Dioxins/furans (Total Mass Basis) No Total Mass Basis... Biomass—290 parts per million dry volume Coal—340 parts per million dry volume 3-run average (1 hour...

  8. Chesapeake Bay hypoxic volume forecasts and results

    USGS Publications Warehouse

    Scavia, Donald; Evans, Mary Anne

    2013-01-01

    The 2013 Forecast - Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer’s hypoxia volume forecast is 6.1 km3, slightly smaller than average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.

  9. 40 CFR 60.493 - Performance test and compliance provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... equivalent or alternative method. The owner or operator shall determine from company records the volume of... estimate the volume of coating used at each facility by using the average dry weight of coating, number of... acceptable to the Administrator. (i) Calculate the volume-weighted average of the total mass of VOC per...
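The volume-weighted average the excerpt refers to can be sketched as a simple weighted mean: each coating's VOC content weighted by the volume of that coating used. The batch values below are hypothetical, not figures from the rule.

```python
# Hypothetical coating batches: (volume of coating used, L;
# mass of VOC per unit volume of coating solids, kg/L).
coatings = [
    (1200.0, 0.30),
    (800.0, 0.45),
    (500.0, 0.20),
]

total_volume = sum(v for v, _ in coatings)
# Volume-weighted average VOC content across all coatings used.
vwa = sum(v * voc for v, voc in coatings) / total_volume
print(vwa)  # kg VOC per litre of coating solids
```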

  10. Water Budget of East Maui, Hawaii

    USGS Publications Warehouse

    Shade, Patricia J.

    1999-01-01

    Ground-water recharge is estimated from six monthly water budgets calculated using long-term average rainfall and streamflow data, estimated pan-evaporation and fog-drip data, and soil characteristics. The water-budget components are defined seasonally, through the use of monthly data, and spatially by broad climatic and geohydrologic areas, through the use of a geographic information system model. The long-term average water budget for east Maui was estimated for natural land-use conditions. The average rainfall, fog-drip, runoff, evapotranspiration, and ground-water recharge volumes for the east Maui study area are 2,246 Mgal/d, 323 Mgal/d, 771 Mgal/d, 735 Mgal/d, and 1,064 Mgal/d, respectively.
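The budget components quoted above close arithmetically: recharge is what remains of the inputs (rainfall plus fog drip) after the outputs (runoff plus evapotranspiration). A one-line check using the abstract's numbers:

```python
# East Maui water-budget components from the abstract, in Mgal/d.
rainfall, fog_drip = 2246.0, 323.0
runoff, evapotranspiration = 771.0, 735.0

# Recharge closes the budget: inputs minus outputs.
recharge = rainfall + fog_drip - runoff - evapotranspiration
print(recharge)  # 1063, vs the reported 1,064 (component rounding)
```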

  11. Quantifying Long-Term Retention of Excised Fat Grafts: A Longitudinal, Retrospective Cohort Study of 108 Patients Followed for Up to 8.4 Years.

    PubMed

    Herly, Mikkel; Ørholt, Mathias; Glovinski, Peter V; Pipper, Christian B; Broholm, Helle; Poulsgaard, Lars; Fugleholm, Kåre; Thomsen, Carsten; Drzewiecki, Krzysztof T

    2017-05-01

    Predicting the degree of fat graft retention is essential when planning reconstruction or augmentation with free fat grafting. Most surgeons observe volume loss over time after fat grafting; however, the portion lost to resorption after surgery is still poorly defined, and the time to reach steady state is unknown. The authors compiled a retrospective, longitudinal cohort of patients with vestibular schwannoma who had undergone ablative surgery and reconstruction with excised fat between the years 2006 and 2015. Fat volume retention was quantified by computed tomography and magnetic resonance imaging and used to model a graft retention trajectory and determine the volumetric steady state. In addition, the authors evaluated the association between graft retention and secondary characteristics, such as sex and transplant volume. A total of 108 patients were included. The average baseline graft volume was 18.1 ± 4.8 ml. The average time to reach steady state was 806 days after transplantation. By this time, the average fat graft retention was 50.6 percent (95 percent CI, 46.4 to 54.7 percent). No statistically significant association was found between baseline graft volume and retention. Fat graft retention over time was significantly higher in men than in women (57.7 percent versus 44.5 percent; p < 0.001). The authors' data provide evidence that the time to reach fat graft volumetric steady state is considerably longer than previously expected. Fat grafts continue to shrink long after the initial hypoxia-induced tissue necrosis has been cleared, thus indicating that factors other than blood supply may be more influential for fat graft retention. Therapeutic, IV.

  12. Combining registration and active shape models for the automatic segmentation of the lymph node regions in head and neck CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Antong; Deeley, Matthew A.; Niermann, Kenneth J.

    2010-12-15

Purpose: Intensity-modulated radiation therapy (IMRT) is the state-of-the-art technique for head and neck cancer treatment. It requires precise delineation of the target to be treated and structures to be spared, which is currently done manually. The process is a time-consuming task of which the delineation of lymph node regions is often the longest step. Atlas-based delineation has been proposed as an alternative, but, in the authors' experience, this approach is not accurate enough for routine clinical use. Here, the authors improve atlas-based segmentation results obtained for level II-IV lymph node regions using an active shape model (ASM) approach. Methods: An average image volume was first created from a set of head and neck patient images with minimally enlarged nodes. The average image volume was then registered using affine, global, and local nonrigid transformations to the other volumes to establish a correspondence between surface points in the atlas and surface points in each of the other volumes. Once the correspondence was established, the ASMs were created for each node level. The models were then used to first constrain the results obtained with an atlas-based approach and then to iteratively refine the solution. Results: The method was evaluated through a leave-one-out experiment. The ASM- and atlas-based segmentations were compared to manual delineations via the Dice similarity coefficient (DSC) for volume overlap and the Euclidean distance between manual and automatic 3D surfaces. The mean DSC value obtained with the ASM-based approach is 10.7% higher than with the atlas-based approach; the mean and median surface errors were decreased by 13.6% and 12.0%, respectively. Conclusions: The ASM approach is effective in reducing segmentation errors in areas of low CT contrast where purely atlas-based methods are challenged. Statistical analysis shows that the improvements brought by this approach are significant.
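The Dice similarity coefficient used to score volume overlap above is a standard metric and can be sketched in a few lines; the toy masks are illustrative.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary volumes:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 3D masks: a 10x10x10 cube and the same cube shifted by two voxels.
vol = np.zeros((20, 20, 20), dtype=bool)
vol[5:15, 5:15, 5:15] = True
shifted = np.roll(vol, 2, axis=0)
print(dice(vol, shifted))  # 0.8: 8 of 10 slices still overlap
```

A DSC of 1 means perfect overlap and 0 means none, so the abstract's 10.7% mean-DSC gain is a direct measure of improved agreement with the manual delineations.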

  13. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were measured, with and without the pneumotach in place, and differences noted. The authors acknowledge support of NIH Grant 2R01DC005642-10A1.
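The measurement principle described above reduces to Ohm's-law-style arithmetic: volume flow is the pressure drop across the known resistance divided by that resistance. The resistance value and pressure samples below are hypothetical, for illustration only.

```python
# Pneumotach flow estimation: Q = delta_p / R for a linear resistance.
R = 0.4                               # aerodynamic resistance, cmH2O/(L/s) (assumed)
delta_p = [0.08, 0.12, 0.10, 0.06]    # sampled pressure drops, cmH2O (assumed)

flow = [dp / R for dp in delta_p]     # estimated glottal volume flow, L/s
mean_flow = sum(flow) / len(flow)     # time-averaged volume flow
print(mean_flow)
```

The study's concern is precisely that adding R to the airway changes the system being measured, so the estimated Q may not match the unmasked voice.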

  14. Comparisons of Integrated Radiation Transport Models with Microdosimetry Data in Spaceflight

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Nikjoo, H.; Kim, M. Y.; Hu, X.; Dicello, J. F.; Pisacane, V. L.

    2006-01-01

Astronauts are exposed to galactic cosmic rays (GCR), trapped protons, and possible solar particle events (SPE) during spaceflight. For such complicated mixtures of radiation types and kinetic energies, tissue equivalent proportional counters (TEPCs) represent a simple time-dependent approach for radiation monitoring. Of interest in radiation protection is the average quality factor of a radiation field defined as a function of linear energy transfer, LET, Q(sub ave)(LET). However, TEPCs measure the average quality factor as a function of lineal energy (y), Q(sub ave)(y), defined as the average energy deposition in a volume divided by the average chord length of the volume. Lineal energy, y, deviates from LET due to energy straggling, delta-ray escape or entry, and nuclear fragments produced in the detector. Using integrated space radiation models that include the transport code HZETRN/BRYNTRN, the quantum nuclear interaction model QMSFRG, and results from Monte-Carlo track simulations of the TEPC response to ions, we consider comparisons of model calculations to TEPC results from NASA missions in low Earth orbit and make predictions for lunar and Mars missions. Good agreement between the model and measured spectra from past NASA missions is found. A finding of this work is that TEPC values for trapped or solar protons of Q(sub ave)(y) range from 1.9-2.5, overestimating Q(sub ave)(LET), which ranges from 1.4-1.6, with both quantities increasing with shielding depth due to nuclear secondaries. Comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.5-4.5, while TEPCs measure 2.9-3.4 for Q(sub ave)(y), with the GCR values decreasing with depth as heavy ions are absorbed in shielding material. Our results support the use of TEPCs for space radiation environmental monitoring when computational analysis is used for proper data interpretation.

15. Analysis of the variation in OCT measurements of a structural bottleneck for eye-brain transfer of visual information from 3D volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced in time by 1-4 weeks, in 13 subjects diagnosed with early to moderate glaucoma. At each visit, 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semiautomatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes, and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of five times higher than the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated as 1.00 ± 0.13 mm (D.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of the within-subject difference between baseline and follow-up can be made with reasonable sample sizes. 
Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up is required before a significant change of PIMD-Average [0;2π] can be observed with a power of 0.8. This is shorter than what has been observed both for HRT measurements and automated perimetry measurements with a similar observation rate. It is concluded that PIMD-Average [0;2π] has the potential to detect deterioration of glaucoma more quickly than currently available primary diagnostic instruments. To increase the efficiency of PIMD-Average [0;2π] further, the variation among visits within subject has to be reduced.

  16. 40 CFR 80.305 - How are credits generated during the time period 2000 through 2003?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Abt... averaging period. Va = Total volume of gasoline produced during the averaging period at the refinery (or for a foreign refinery, the total volume of gasoline produced during the averaging period at the...

  17. 40 CFR 80.305 - How are credits generated during the time period 2000 through 2003?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Abt... averaging period. Va = Total volume of gasoline produced during the averaging period at the refinery (or for a foreign refinery, the total volume of gasoline produced during the averaging period at the...

  18. Production model in the conditions of unstable demand taking into account the influence of trading infrastructure: Ergodicity and its application

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2015-04-01

    A production model with allowance for a working-capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by the need to analyze the functioning of macroeconomic structures with low competitiveness. The model is formalized as a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic and its final probability distribution is found. Expressions for the average production load and the average product stock are obtained by analyzing the stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the assessment of the firm's market value and on the production load level is analyzed using comparative-statics methods.
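The authors find a closed-form solution for their specific Bellman equation; as a generic illustration of the same machinery, here is a minimal value-iteration sketch for a stylized production/inventory problem with random demand. All parameters are hypothetical and this is not the authors' model.

```python
import numpy as np

# Stylized inventory problem: stock levels 0..S, action = produce 0 or 1
# unit, random unit demand with probability p_demand, discounted profit.
S = 10
beta = 0.95                 # discount factor
price, cost, hold = 1.0, 0.6, 0.05
p_demand = 0.5

V = np.zeros(S + 1)         # value function over stock levels
for _ in range(500):
    V_new = np.empty_like(V)
    for s in range(S + 1):
        best = -np.inf
        for a in (0, 1):                      # produce 0 or 1 unit
            s_prod = min(s + a, S)
            sold = 1 if s_prod >= 1 else 0    # can only sell from stock
            r1 = price * sold - cost * a - hold * (s_prod - sold)  # demand = 1
            r0 = -cost * a - hold * s_prod                         # demand = 0
            val = (p_demand * (r1 + beta * V[s_prod - sold])
                   + (1 - p_demand) * (r0 + beta * V[s_prod]))
            best = max(best, val)
        V_new[s] = best
    if np.max(np.abs(V_new - V)) < 1e-10:     # Bellman operator converged
        V = V_new
        break
    V = V_new
```

Value iteration converges because the Bellman operator is a beta-contraction; the fixed point V gives the firm's discounted value as a function of current stock.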

  19. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  20. Further analysis of clinical feasibility of OCT-based glaucoma diagnosis with Pigment epithelium central limit- Inner limit of the retina Minimal Distance (PIMD)

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2017-02-01

    The present study aimed to elucidate whether comparison of angular segments of the Pigment epithelium central limit - Inner limit of the retina Minimal Distance, measured over 2π radians in the frontal plane (PIMD-2π), between visits of a patient renders sufficient precision for detection of loss of nerve fibers in the optic nerve head. An optic nerve head raster-scanned cube was captured with a TOPCON 3D OCT 2000 (Topcon, Japan) device in one early- to moderate-stage glaucoma eye of each of 13 patients. All eyes were recorded at two visits less than 1 month apart. At each visit, 3 volumes were captured. Each volume was extracted from the OCT device for analysis. Then, angular PIMD was segmented three times over 2π radians in the frontal plane with a semi-automatic algorithm, resolved in 500 equally spaced steps (PIMD-2π). It was found that individual segmentations within volumes, within visits, and within subjects can be phase-adjusted to each other in the frontal plane using cross-correlation. Cross-correlation was also used to phase-adjust volumes within visits within subjects, and visits to each other within subjects. Then, PIMD-2π for each subject was split into 250 bundles of 2 adjacent PIMDs. Finally, the sources of variation for estimates of segments of PIMD-2π were derived with an analysis of variance assuming a mixed model. The variation among adjacent PIMDs was found to be very small in relation to the variation among segmentations. The variation among visits was found to be insignificant in relation to the variation among volumes, and the variance for segmentations was found to be on the order of 20% of that for volumes. The estimated variances imply that, if 3 segmentations are averaged within a volume and at least 10 volumes are averaged within a visit, a reduction of a PIMD-2π segment of around 10% from baseline to a subsequent visit can be estimated as significant.
Considering a loss rate for a PIMD-2π segment of 23 μm/year, 4 visits per year, and averaging of 3 segmentations per volume and 3 volumes per visit, a significant reduction from baseline can be detected with a power of 80% in about 18 months. At a higher loss rate for a PIMD-2π segment, a significant difference from baseline can be detected earlier. Averaging over more volumes per visit considerably decreases the time to detection of a significant reduction of a segment of PIMD-2π, whereas increasing the number of segmentations averaged per visit only slightly reduces that time. It is concluded that phase adjustment in the frontal plane with cross-correlation allows high-precision estimates of a segment of PIMD-2π, implying a substantially shorter follow-up time for detection of a significant change than mean deviation (MD) in a visual field estimated with the Humphrey perimeter or neural rim area (NRA) estimated with the Heidelberg retinal tomograph.
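The phase adjustment by cross-correlation described above can be sketched on synthetic angular profiles: a circular (FFT-based) cross-correlation recovers the rotation that best aligns two 500-step profiles. The profile shape and shift below are invented for illustration.

```python
import numpy as np

n = 500                                             # angular samples over 2*pi
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
profile = 1.0 + 0.2 * np.sin(3.0 * theta)           # reference angular profile
shift_true = 37                                     # unknown rotation, in samples
rotated = np.roll(profile, shift_true)

# Circular cross-correlation via FFT; the lag of the peak is the rotation
# that best aligns the rotated profile with the reference.
xcorr = np.fft.ifft(np.fft.fft(rotated) * np.conj(np.fft.fft(profile))).real
lag = int(np.argmax(xcorr))
aligned = np.roll(rotated, -lag)                    # phase-adjusted profile
```

After alignment, repeated segmentations (or volumes, or visits) can be averaged pointwise without the angular offset inflating the variance.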

  1. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  2. Volume of logging residues in Oregon, Washington, and California—initial results from a 1969-70 study.

    Treesearch

    James O. Howard

    1971-01-01

    A study conducted during 1969-70 in Oregon, Washington, and California indicates that the average net volume of logging residues ranged from 325 to 3,156 cubic feet per acre. The highest volume was on National Forests in the Douglas-fir region, which averaged 2.5 times the volume found on private lands. The lowest volumes of residue were found in the ponderosa pine region...

  3. Estimation of Bid Curves in Power Exchanges using Time-varying Simultaneous-Equations Models

    NASA Astrophysics Data System (ADS)

    Ofuji, Kenta; Yamaguchi, Nobuyuki

    Simultaneous-equations models (SEM) are generally used in economics to estimate interdependent endogenous variables such as price and quantity in a competitive equilibrium market. In this paper, we have attempted to apply SEM to the JEPX (Japan Electric Power eXchange) spot market, a single-price auction market, using the publicly available data on selling and buying bid volumes, system price, and traded quantity. The aim of this analysis is to understand the magnitude of the influences of the selling and buying bids on the auctioned price and quantity, rather than to forecast prices and quantity for risk-management purposes. In contrast to Ordinary Least Squares (OLS) estimation, where the estimation results represent time-independent average values, we employ a time-varying simultaneous-equations model (TV-SEM) to capture structural changes inherent in those influences, using state-space models with stepwise Kalman-filter estimation. The results showed that the buying bid volume has the highest magnitude of influence among the factors considered, exhibiting time-dependent changes that range as widely as about 240% of its average. The slope of the supply curve also varies over time, implying an elastic supply, while the demand curve remains comparatively inelastic and stable over time.
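A minimal sketch of the stepwise Kalman-filter idea behind the TV-SEM, reduced to a single time-varying regression coefficient with a random-walk state. The data are synthetic, not JEPX data, and this is not the paper's full simultaneous-equations system.

```python
import numpy as np

# State-space model: y_t = b_t * x_t + e_t,  b_t = b_{t-1} + w_t.
rng = np.random.default_rng(0)
T = 400
x = rng.normal(1.0, 0.3, T)                       # regressor (e.g. bid volume)
b_true = 1.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, T))
y = b_true * x + rng.normal(0.0, 0.1, T)          # observations

q, r = 1e-3, 0.1 ** 2                             # state / observation noise variances
b, P = 0.0, 1.0                                   # initial state and its variance
b_hat = np.empty(T)
for t in range(T):
    P = P + q                                     # predict (random walk)
    K = P * x[t] / (x[t] * x[t] * P + r)          # Kalman gain
    b = b + K * (y[t] - b * x[t])                 # update with innovation
    P = (1.0 - K * x[t]) * P
    b_hat[t] = b

rmse = float(np.sqrt(np.mean((b_hat[50:] - b_true[50:]) ** 2)))
```

Unlike an OLS fit, which returns one time-independent coefficient, the filtered trajectory b_hat tracks the slow structural change in the coefficient.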

  4. Variability in Annual and Average Mass Changes in Antarctica from 2004 to 2009 using Satellite Laser Altimetry

    NASA Astrophysics Data System (ADS)

    Babonis, G. S.; Csatho, B. M.; Schenk, A. F.

    2016-12-01

    We present a new record of Antarctic ice thickness changes, reconstructed from ICESat laser altimetry observations, from 2004-2009, at over 100,000 locations across the Antarctic Ice Sheet (AIS). This work generates elevation time series at ICESat groundtrack crossover regions on an observation-by-observation basis, with rigorous, quantified error estimates using the SERAC approach (Schenk and Csatho, 2012). The results include average and annual elevation, volume, and mass changes in Antarctica, fully corrected for glacial isostatic adjustment (GIA) and known intercampaign biases, and partitioned into contributions from surficial processes (e.g., firn densification) and ice dynamics. The modular flexibility of the SERAC framework allows for the assimilation of multiple ancillary datasets (e.g., GIA models, Intercampaign Bias Corrections, IBC) in a common framework, to calculate mass changes for several different combinations of GIA models and IBCs and to arrive at a measure of variability from these results. We are able to determine the effect these corrections have on annual and average volume and mass change calculations in Antarctica, and to explore how these differences vary between drainage basins and with elevation. As such, this contribution presents a method that complements, and is consistent with, the 2012 Ice sheet Mass Balance Inter-comparison Exercise (IMBIE) results (Shepherd 2012). Additionally, this work will contribute to the 2016 IMBIE, which seeks to reconcile ice sheet mass changes from different observations, including laser altimetry, using different methodologies and ancillary datasets including GIA models, Firn Densification Models, and Intercampaign Bias Corrections.

  5. Study of microdosimetric energy deposition patterns in tissue-equivalent medium due to low-energy neutron fields using a graphite-walled proportional counter.

    PubMed

    Waker, A J; Aslam

    2011-06-01

    To improve radiation protection dosimetry for low-energy neutron fields encountered in nuclear power reactor environments, there is increasing interest in modeling neutron energy deposition in metrological instruments such as tissue-equivalent proportional counters (TEPCs). Along with these computational developments, there is also a need for experimental data with which to benchmark and test the results obtained from the modeling methods developed. The experimental work described in this paper is a study of the energy deposition in tissue-equivalent (TE) medium using an in-house built graphite-walled proportional counter (GPC) filled with TE gas. The GPC is a simple model of a standard TEPC because the response of the counter at these energies is almost entirely due to the neutron interactions in the sensitive volume of the counter. Energy deposition in tissue spheres of diameter 1, 2, 4 and 8 µm was measured in low-energy neutron fields below 500 keV. We have observed a continuously increasing trend in microdosimetric averages with an increase in neutron energy. The values of these averages decrease as we increase the simulated diameter at a given neutron energy. A similar trend for these microdosimetric averages has been observed for standard TEPCs and the Rossi-type, TE, spherical wall-less counter filled with propane-based TE gas in the same energy range. This implies that at the microdosimetric level, in the neutron energy range we employed in this study, the pattern of average energy deposited by starter and insider proton recoil events in the gas is similar to those generated cumulatively by crosser and stopper events originating from the counter wall plus starter and insider recoil events originating in the sensitive volume of a TEPC.

  6. Magneto-elastic modeling of composites containing chain-structured magnetostrictive particles

    NASA Astrophysics Data System (ADS)

    Yin, H. M.; Sun, L. Z.; Chen, J. S.

    2006-05-01

    Magneto-elastic behavior is investigated for two-phase composites containing chain-structured magnetostrictive particles under both magnetic and mechanical loading. To derive the local magnetic and elastic fields, three modified Green's functions are derived and explicitly integrated for the infinite domain containing a spherical inclusion with a prescribed magnetization, body force, and eigenstrain. A representative volume element containing a chain of infinite particles is introduced to solve for the averaged magnetic and elastic fields in the particles and the matrix. The effective magnetostriction of the composite is derived by considering the particles' magnetostriction and the magnetic interaction force. It is shown that there exists an optimal choice of the Young's modulus of the matrix and the volume fraction of the particles that achieves the maximum effective magnetostriction. A transversely isotropic effective elasticity is derived in the limit of infinitesimal deformation. Disregarding the interaction term, this model yields the same effective elasticity as the Mori-Tanaka model. Comparisons of model results with experimental data and other models show the efficacy of the model and suggest that particle interactions have a considerable effect on the effective magneto-elastic properties of composites even at low particle volume fractions.

  7. Two-dimensional model of vocal fold vibration for sound synthesis of voice and soprano singing

    NASA Astrophysics Data System (ADS)

    Adachi, Seiji; Yu, Jason

    2005-05-01

    Voiced sounds were simulated with a computer model of the vocal fold composed of a single mass vibrating both parallel and perpendicular to the airflow. Similarities with the two-mass model are found in the amplitudes of the glottal area and the glottal volume flow velocity, the variation in the volume flow waveform with the vocal tract shape, and the dependence of the oscillation amplitude upon the average opening area of the glottis, among other features. A few dissimilarities are also found, in the more symmetric glottal and volume flow waveforms in the rising and falling phases. The major improvement of the present model over the two-mass model is that it yields a smooth transition between oscillations with an inductive load and a capacitive load of the vocal tract, with no sudden jumps in the vibration frequency. Self-excitation is possible both below and above the first formant frequency of the vocal tract. By taking advantage of the wider continuous frequency range, the two-dimensional model can successfully be applied to the sound synthesis of high-pitched soprano singing, where the fundamental frequency sometimes exceeds the first formant frequency.

  8. Gallstones and gallbladder cancer-volume and weight of gallstones are associated with gallbladder cancer: a case-control study.

    PubMed

    Roa, Iván; Ibacache, Gilda; Roa, Juan; Araya, Juan; de Aretxabala, Xabier; Muñoz, Sergio

    2006-06-15

    Gallstones are considered the most important risk factor for gallbladder cancer. To identify differences in the number, weight, volume, and density of gallstones associated with chronic cholecystitis (CC), gallbladder dysplasia (GD), and gallbladder cancer (GBC). A total of 125 cases were selected, of which 93 had gallstones associated with GBC and 31 had gallstones associated with GD. The controls were those with CC, matched by sex and age. The number, weight, volume, and density of these gallstones were examined in order to determine differences and relative cancer risk. Number: Multiple gallstones were present in over 76% of cases (GBC and GD) and controls (P = ns). The average number of multiple stones was 21 in GBC versus 14 in controls (P < 0.01). Weight: The average weight of the gallstones was 9.6 g in GBC versus 6.0 g in controls (P = 0.0004). The average weight in multiple stones over 10 g had strong association with GBC (P = 0.0006). Volume: The average volume was 11.7 and 6.48 ml in GBC and controls (P = 0.0002). Average volumes of 6, 8, and 10 ml had a relative cancer risk of 5, 7, and 11 times, respectively. Size: No differences were shown between GBC, GD, and controls. The volume of gallstones associated with other risk factors of GBC may be helpful in prioritizing cholecystectomies in symptomatic patients. Copyright 2006 Wiley-Liss, Inc.

  9. Soft-sphere simulations of a planar shock interaction with a granular bed

    NASA Astrophysics Data System (ADS)

    Stewart, Cameron; Balachandar, S.; McGrath, Thomas P.

    2018-03-01

    Here we consider the problem of shock propagation through a layer of spherical particles. A point-particle force model is used to capture the shock-induced aerodynamic force acting upon the particles. The discrete element method (DEM) code LIGGGHTS is used to implement the shock-induced force as well as to capture the collisional forces within the system. A volume-fraction-dependent drag correction is applied using Voronoi tessellation to calculate the volume of fluid around each individual particle. A statistically stationary frame is chosen so that spatial and temporal averaging can be performed to calculate ensemble-averaged macroscopic quantities, such as the granular temperature. A parametric study is carried out by varying the coefficient of restitution for three sets of multiphase shock conditions. A self-similar profile is obtained for the granular temperature that is dependent on the coefficient of restitution. A traveling wave structure is observed in the particle concentration downstream of the shock; this instability arises from the volume-fraction-dependent drag force. The intensity of the traveling wave increases significantly as inelastic collisions are introduced. Downstream of the shock, the variance in Voronoi volume fraction is shown to have a strong dependence upon the coefficient of restitution, indicating clustering of particles induced by collisional dissipation. Statistics of the Voronoi volume are computed upstream and downstream of the shock and compared to theoretical results for randomly distributed hard spheres.
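The ensemble-averaged granular temperature mentioned above is essentially the variance of particle velocity fluctuations about the local mean velocity. A minimal sketch on a synthetic particle population (not DEM output; drift and fluctuation magnitudes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
mean_velocity = np.array([10.0, 0.0, 0.0])        # post-shock drift (hypothetical)
v = mean_velocity + rng.normal(0.0, 2.0, (n, 3))  # particle velocities

u = v.mean(axis=0)                                # ensemble-mean velocity
fluct = v - u                                     # fluctuating part
# Granular temperature per velocity component, <|v'|^2> / 3:
granular_T = float(np.mean(np.sum(fluct ** 2, axis=1)) / 3.0)
```

In the actual study this average is taken over spatial bins in the statistically stationary shock frame, and the fluctuation variance (here 4.0 by construction) is what decays with increasingly inelastic collisions.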

  10. Measurement of Crystalline Lens Volume During Accommodation in a Lens Stretcher.

    PubMed

    Marussich, Lauren; Manns, Fabrice; Nankivil, Derek; Maceo Heilman, Bianca; Yao, Yue; Arrieta-Quintero, Esdras; Ho, Arthur; Augusteyn, Robert; Parel, Jean-Marie

    2015-07-01

    To determine if the lens volume changes during accommodation. The study used data acquired on 36 cynomolgus monkey lenses that were stretched in a stepwise fashion to simulate disaccommodation. At each step, stretching force and dioptric power were measured and a cross-sectional image of the lens was acquired using an optical coherence tomography system. Images were corrected for refractive distortions and lens volume was calculated assuming rotational symmetry. The average change in lens volume was calculated and the relation between volume change and power change, and between volume change and stretching force, were quantified. Linear regressions of volume-power and volume-force plots were calculated. The mean (± SD) volume in the unstretched (accommodated) state was 97 ± 8 mm3. On average, there was a small but statistically significant (P = 0.002) increase in measured lens volume with stretching. The mean change in lens volume was +0.8 ± 1.3 mm3. The mean volume-power and volume-load slopes were -0.018 ± 0.058 mm3/D and +0.16 ± 0.40 mm3/g. Lens volume remains effectively constant during accommodation, with changes that are less than 1% on average. This result supports a hypothesis that the change in lens shape with accommodation is accompanied by a redistribution of tissue within the capsular bag without significant compression of the lens contents or fluid exchange through the capsule.
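The volume computation under the rotational-symmetry assumption can be sketched with the disk method, V = π ∫ r(z)² dz, applied to a cross-sectional profile. Here a spherical profile of hypothetical radius (chosen to give roughly the reported ~97 mm³) stands in for a distortion-corrected lens contour.

```python
import numpy as np

R = 2.87                                   # mm, hypothetical "lens" radius
z = np.linspace(-R, R, 2001)               # axial sample positions
r = np.sqrt(np.maximum(R * R - z * z, 0))  # cross-section radius at each z

# Disk (solid-of-revolution) integration: V = pi * integral of r(z)^2 dz,
# using the trapezoidal rule.
f = r ** 2
volume = float(np.pi * np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(z)))

exact = 4.0 / 3.0 * np.pi * R ** 3         # analytic sphere volume for comparison
```

For a real lens image the profile r(z) would come from the segmented, refraction-corrected OCT cross section rather than a closed-form curve.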

  11. Improved modelling of ship SO2 emissions—a fuel-based approach

    NASA Astrophysics Data System (ADS)

    Endresen, Øyvind; Bakke, Joachim; Sørgård, Eirik; Flatlandsmo Berglen, Tore; Holmvang, Per

    Significant variations are apparent among the various reported regional and global ship SO2 emission inventories. Important parameters for SO2 emission modelling are sulphur content and marine fuel consumption. Since 1993, the global average sulphur content of heavy fuel has shown an overall downward trend, while bunker sales have increased. We present an improved bottom-up approach to estimating marine sulphur emissions from ship transportation, including their geographical distribution. More than 53,000 individual bunker samples are used to establish regionally and globally volume-weighted average sulphur contents for heavy and distillate marine fuels. We find that the year-2002 sulphur content in heavy fuels varies regionally from 1.90% (South America) to 3.07% (Asia), with a globally weighted average of 2.68% sulphur. The calculated globally weighted average content for heavy fuels is found to be 5% higher than the average (arithmetic mean) sulphur content commonly used. The likely reason is that larger bunker stems are mainly of high-viscosity heavy fuel, which tends to have higher sulphur values than lower-viscosity fuels. The uncertainties in SO2 inventories are significantly reduced using our updated SO2 emission factors (volume-weighted sulphur content). Regional marine bunker sales figures are combined with volume-weighted sulphur contents for each region to give a global SO2 emission estimate in the range of 5.9-7.2 Tg SO2 for international marine transportation. Also taking into account domestic sales, the total emissions from all ocean-going transportation are estimated at 7.0-8.5 Tg SO2. Our estimate is significantly lower than the recent global estimate reported by Corbett and Koehler [2003. Journal of Geophysical Research: Atmospheres 108] (6.49 Tg S, or about 13.0 Tg SO2). Endresen et al. [2004. Journal of Geophysical Research 109, D23302] argue that uncertainties in the input data for the activity-based method give emission estimates that are too high. We also indicate that this higher estimate would nearly double regional emissions compared with detailed movement-based estimates. The paper presents an alternative approach to estimating present overall SO2 ship emissions with improved accuracy.
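The distinction between a volume-weighted and an arithmetic mean sulphur content can be made concrete with a toy calculation; the stem volumes and sulphur percentages below are invented for illustration, not from the bunker-sample dataset.

```python
# Hypothetical bunker stems: large stems first, with higher sulphur,
# mimicking the pattern the study reports for high-viscosity heavy fuel.
volumes = [60.0, 25.0, 10.0, 5.0]   # stem volumes (arbitrary units)
sulphur = [2.9, 2.5, 2.2, 2.0]      # % sulphur per stem

weighted = sum(v * s for v, s in zip(volumes, sulphur)) / sum(volumes)
arithmetic = sum(sulphur) / len(sulphur)
# Large high-sulphur stems pull the volume-weighted mean above the
# arithmetic mean, as the study finds for heavy fuels.
```

With these numbers the volume-weighted mean is 2.685% versus an arithmetic mean of 2.4%, illustrating why emission factors should weight by fuel volume.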

  12. Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran

    2011-01-01

    A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.

  13. Toward an Accurate Theoretical Framework for Describing Ensembles for Proteins under Strongly Denaturing Conditions

    PubMed Central

    Tran, Hoang T.; Pappu, Rohit V.

    2006-01-01

    Our focus is on an appropriate theoretical framework for describing highly denatured proteins. In high concentrations of denaturants, proteins behave like polymers in a good solvent and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to resolve between conflicting proposals for structure in ensembles for highly denatured states. PMID:16766618

  14. SU-E-T-628: Predicted Risk of Post-Irradiation Cerebral Necrosis in Pediatric Brain Cancer Patients: A Treatment Planning Comparison of Proton Vs. Photon Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freund, D; Zhang, R; Sanders, M

    Purpose: Post-irradiation cerebral necrosis (PICN) is a severe late effect that can result from treatment of brain cancers using radiation therapy. The purpose of this study was to compare the treatment plans and predicted risk of PICN after volumetric modulated arc therapy (VMAT) to the risk after passively scattered proton therapy (PSPT) and intensity modulated proton therapy (IMPT) in a cohort of pediatric patients. Methods: Thirteen pediatric patients of varying age and sex were selected for this study. A clinical treatment volume (CTV) was constructed for 8 glioma patients and 5 ependymoma patients. The prescribed dose was 54 Gy over 30 fractions to the planning volume. Dosimetric endpoints were compared between VMAT and proton plans. The normal tissue complication probability (NTCP) following VMAT and proton therapy planning was also calculated using PICN as the biological endpoint. Sensitivity tests were performed to determine whether the predicted risk of PICN was sensitive to positional errors, proton range errors, and selection of risk models. Results: Both PSPT and IMPT plans resulted in a significant increase in the maximum dose and a reduction in the total brain volume irradiated to low doses compared with the VMAT plans. The average ratios of NTCP between PSPT and VMAT were 0.56 and 0.38 for glioma and ependymoma patients, respectively, and the average ratios of NTCP between IMPT and VMAT were 0.67 and 0.68 for glioma and ependymoma plans, respectively. Sensitivity tests revealed that the predicted ratios of risk were insensitive to range and positional errors but varied with risk-model selection. Conclusion: Both PSPT and IMPT plans resulted in a decrease in the predicted risk of necrosis for the pediatric plans studied in this work. Sensitivity analysis upheld the qualitative findings of the risk models used in this study; however, more accurate models that take into account dose and volume are needed.
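The abstract does not specify which NTCP risk models were used; as a hedged sketch of how a Lyman-type NTCP and a plan-to-plan risk ratio are typically computed, with illustrative parameter values that are not clinically validated:

```python
from math import erf, sqrt

def ntcp_lyman(d_eff, td50, m):
    """Lyman NTCP: probit of the normalized effective dose,
    NTCP = Phi((D_eff - TD50) / (m * TD50))."""
    t = (d_eff - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Hypothetical effective doses to normal brain for two plans (Gy):
ntcp_photon = ntcp_lyman(d_eff=45.0, td50=60.0, m=0.15)
ntcp_proton = ntcp_lyman(d_eff=40.0, td50=60.0, m=0.15)
ratio = ntcp_proton / ntcp_photon   # analogous to the PSPT/VMAT ratios above
```

With these invented numbers the proton plan's predicted risk is roughly a quarter of the photon plan's, illustrating how a modest reduction in effective dose translates into a large NTCP ratio on the steep part of the dose-response curve.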

  15. Effect of Fuel Temperature Profile on Eigenvalue Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greifenkamp, Tom E; Clarno, Kevin T; Gehin, Jess C

    2008-01-01

    Use of an average fuel temperature is current practice when modeling fuel for eigenvalue (k-inf) calculations. This is an approximation: it is known from heat-transfer methods that a fuel pin with linear power q' will have a temperature that varies radially, with a maximum at the centerline [1]. This paper describes an investigation into the effects on k-inf and isotopic concentrations of modeling a fuel pin using a single average temperature versus a radially varying fuel temperature profile. The axial variation is not discussed in this paper. A single fuel pin was modeled having 1, 3, 5, 8, or 10 regions of equal volumes (areas). Fig. 1 shows a model of a 10-ring fuel pin surrounded by a gap and then cladding.
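Dividing the pellet cross section into n regions of equal volume (equal area) reduces to choosing ring outer radii r_i = R·√(i/n), so each annulus has area πR²/n. A short sketch; the pellet radius is an assumed typical value, not taken from the paper.

```python
import numpy as np

R = 0.41   # cm, assumed pellet radius (hypothetical)
n = 10     # number of equal-area rings

# Outer radius of ring i: R * sqrt(i / n), i = 1..n.
radii = R * np.sqrt(np.arange(1, n + 1) / n)

# Annulus areas: pi * (r_i^2 - r_{i-1}^2) -- all equal by construction.
areas = np.pi * np.diff(np.concatenate(([0.0], radii)) ** 2)
```

Each ring can then be assigned its own temperature from the radial profile (e.g., the parabolic conduction solution) instead of one pin-averaged value.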

  16. Towards the Truly Predictive 3D Modeling of Recrystallization and Grain Growth in Advanced Technical Alloys

    DTIC Science & Technology

    2010-06-11

    [Report front-matter excerpt:] Modeling with implemented GBI and MD data (steady-state GB migration); formation and analysis of the GB properties database, including relative GB energy for specified GBMs averaged over possible GBIs and database validation against available experimental data. Fig. 6.11: MC Potts recrystallization and grain-growth software, (a) modeling volume analysis; (b) searching for the GB energy value within the included database.

  17. Determination of the turbulence integral model parameters for a case of a coolant angular flow in regular rod-bundle

    NASA Astrophysics Data System (ADS)

    Bayaskhalanov, M. V.; Vlasov, M. N.; Korsun, A. S.; Merinov, I. G.; Philippov, M. Ph

    2017-11-01

    Results on the dependence of the "k-ε" turbulence integral model (TIM) parameters on the angle of coolant flow in a regular bundle of smooth cylindrical rods are presented. The TIM is intended for determining effective momentum and heat transport coefficients in the averaged heat and mass transfer equations for regular rod structures in an anisotropic porous-medium approximation. The TIM equations are obtained by volume-averaging the "k-ε" turbulence model equations over a periodic cell of the rod bundle. Water flow across the rod bundle at angles from 15 to 75 degrees was simulated with the ANSYS CFX code. As a result, the dependence of the TIM parameters on the flow angle was obtained.

  18. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information are available at www.clawpack.org/geoclaw.
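GeoClaw itself uses high-resolution wave-propagation methods with dry-state handling; purely to illustrate the depth-averaged equations it solves, here is a minimal 1D shallow-water finite-volume sketch with simple Lax-Friedrichs fluxes, a flat bottom, and no dry states. All parameters are invented.

```python
import numpy as np

g = 9.81
nx, dx = 200, 0.05
# Dam-break initial condition: deeper water on the left half.
h = np.where(np.arange(nx) * dx < 5.0, 2.0, 1.0)   # depth
hu = np.zeros(nx)                                   # momentum h*u

def flux(h, hu):
    """Physical flux of the 1D shallow water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

dt = 0.4 * dx / np.sqrt(g * 2.0)                    # CFL-limited time step
for _ in range(100):
    q = np.array([h, hu])
    f = flux(h, hu)
    # Local Lax-Friedrichs interface flux between cells i and i+1.
    a = np.abs(hu / h) + np.sqrt(g * h)             # max wave speed per cell
    amax = np.maximum(a[:-1], a[1:])
    fi = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * amax * (q[:, 1:] - q[:, :-1])
    q[:, 1:-1] -= dt / dx * (fi[:, 1:] - fi[:, :-1])  # conservative update
    h, hu = q[0], q[1]

mass = float(h.sum() * dx)   # total water volume, conserved by the scheme
```

The conservative flux-differencing form is the essential property shared with GeoClaw's solvers: total mass is preserved while the dam-break wave spreads.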

  19. Action-based Dynamical Modeling for the Milky Way Disk: The Influence of Spiral Arms

    NASA Astrophysics Data System (ADS)

    Trick, Wilma H.; Bovy, Jo; D'Onghia, Elena; Rix, Hans-Walter

    2017-04-01

    RoadMapping is a dynamical modeling machinery developed to constrain the Milky Way’s (MW) gravitational potential by simultaneously fitting an axisymmetric parametrized potential and an action-based orbit distribution function (DF) to discrete 6D phase-space measurements of stars in the Galactic disk. In this work, we demonstrate RoadMapping's robustness in the presence of spiral arms by modeling data drawn from an N-body simulation snapshot of a disk-dominated galaxy of MW mass with strong spiral arms (but no bar), exploring survey volumes with radii 500 pc ≤ r_max ≤ 5 kpc. The potential constraints are very robust, even though we use a simple action-based DF, the quasi-isothermal DF. The best-fit RoadMapping model always recovers the correct gravitational forces where most of the stars that entered the analysis are located, even for small volumes. For data from large survey volumes, RoadMapping finds axisymmetric models that average well over the spiral arms. Unsurprisingly, the models are slightly biased by the excess of stars in the spiral arms. Gravitational potential models derived from survey volumes with at least r_max = 3 kpc can be reliably extrapolated to larger volumes. However, a large radial survey extent, r_max ~ 5 kpc, is needed to correctly recover the halo scale length. In general, the recovery and extrapolability of potentials inferred from data sets that were drawn from inter-arm regions appear to be better than those of data sets drawn from spiral arms. Our analysis implies that building axisymmetric models for the Galaxy with upcoming Gaia data will lead to sensible and robust approximations of the MW’s potential.

  20. Ex Vivo Liver Experiment of Hydrochloric Acid-Infused and Saline-Infused Monopolar Radiofrequency Ablation: Better Outcomes in Temperature, Energy, and Coagulation.

    PubMed

    Jiang, Xiong-ying; Gu, Yang-kui; Huang, Jin-hua; Gao, Fei; Zou, Ru-hai; Zhang, Tian-qi

    2016-04-01

    To compare temperature, energy, and coagulation between hydrochloric acid-infused radiofrequency ablation (HAIRFA) and normal saline-infused radiofrequency ablation (NSIRFA) in an ex vivo porcine liver model, 60 lesions were created in 30 fresh porcine livers, 30 with HAIRFA and 30 with NSIRFA. Both modalities used a monopolar perfusion electrode connected to an RF generator set at 103 °C and 30 W. In each group, ablation time was set at 10, 20, or 30 min (10 lesions from each group at each time). We compared tissue temperatures (at 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 cm from the electrode tip), average power, deposited energy, deposited energy per coagulation volume (DEV), coagulation diameters, coagulation volume, and spherical ratio between the two groups. Temperature-time curves showed that HAIRFA provided progressively greater heating than NSIRFA. At 30 min, mean average power, deposited energy, coagulation volumes (113.67 vs. 12.28 cm³) and diameters, and the increase in tissue temperature were all much greater with HAIRFA (P < 0.001 for all), whereas DEV was lower (456 vs. 1396 J/cm³, P < 0.001). The spherical ratio was closer to 1 with HAIRFA (1.23 vs. 1.46). Coagulation diameters, volume, and average power of HAIRFA increased significantly with longer ablation times, whereas with NSIRFA these characteristics plateaued after 20 min and power decreased with longer ablation times. HAIRFA creates much larger and more spherical lesions by increasing overall energy deposition, modulating thermal conductivity, and transferring heat during ablation.
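
    As a quick arithmetic cross-check of the 30-min HAIRFA figures quoted above (a back-of-the-envelope sketch using only numbers from the abstract), the total deposited energy should equal DEV times coagulation volume, and dividing by the ablation time should give an average power near the 30 W generator setting:

```python
# Values reported in the abstract for HAIRFA at 30 min.
dev_j_per_cm3 = 456.0        # deposited energy per coagulation volume (J/cm^3)
volume_cm3 = 113.67          # coagulation volume (cm^3)
ablation_time_s = 30 * 60    # 30-min ablation, in seconds

energy_j = dev_j_per_cm3 * volume_cm3      # implied total deposited energy (J)
avg_power_w = energy_j / ablation_time_s   # implied average power (W)
```

    The implied average power of roughly 28.8 W is consistent with the reported 30 W generator setting.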

  1. Incorporating pushing in exclusion-process models of cell migration.

    PubMed

    Yates, Christian A; Parker, Andrew; Baker, Ruth E

    2015-05-01

    The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
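
    The pushing mechanism can be sketched with a minimal 1D on-lattice simulation (our own toy setup, not the paper's model: a periodic lattice, a single pushing probability p_push, and displacement by at most one site):

```python
import random

def step(lattice, p_push, rng):
    """One position-jump attempt on a periodic 1D exclusion lattice with pushing."""
    n = len(lattice)
    i = rng.randrange(n)
    if lattice[i] == 0:              # empty site chosen: nothing to move
        return
    d = rng.choice((-1, 1))          # unbiased jump direction
    j = (i + d) % n
    if lattice[j] == 0:              # ordinary volume-exclusion move
        lattice[i], lattice[j] = 0, 1
    elif rng.random() < p_push:      # target occupied: attempt to push the neighbour
        k = (j + d) % n
        if lattice[k] == 0:          # neighbour displaced one site, agent follows
            lattice[i], lattice[j], lattice[k] = 0, 1, 1
    # if the push fails, the move is aborted, as in plain exclusion

rng = random.Random(0)
lattice = [1] * 20 + [0] * 80        # 20 agents on 100 sites
rng.shuffle(lattice)
for _ in range(10_000):
    step(lattice, p_push=0.5, rng=rng)
```

    After any number of steps the agent count is conserved and no site ever holds more than one agent, so volume exclusion is preserved even when pushing is allowed.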

  2. Kinetics of the B1-B2 phase transition in KCl under rapid compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chuanlong; Smith, Jesse S.; Sinogeikin, Stanislav V.

    2016-01-28

    Kinetics of the B1-B2 phase transition in KCl has been investigated under various compression rates (0.03–13.5 GPa/s) in a dynamic diamond anvil cell using time-resolved x-ray diffraction and fast imaging. Our experimental data show that the volume fraction across the transition generally gives sigmoidal curves as a function of pressure during rapid compression. Based upon classical nucleation and growth theories (Johnson-Mehl-Avrami-Kolmogorov theories), we propose a model that is applicable for studying kinetics for the compression rates studied. The fit of the experimental volume fraction as a function of pressure provides information on effective activation energy and average activation volume at a given compression rate. The resulting parameters are successfully used for interpreting several experimental observables that are compression-rate dependent, such as the transition time, grain size, and over-pressurization. The effective activation energy (Q_eff) is found to decrease linearly with the logarithm of compression rate. When Q_eff is applied to the Arrhenius equation, this relationship can be used to interpret the experimentally observed linear relationship between the logarithm of the transition time and logarithm of the compression rates. The decrease of Q_eff with increasing compression rate results in the decrease of the nucleation rate, which is qualitatively in agreement with the observed change of the grain size with compression rate. The observed over-pressurization is also well explained by the model when an exponential relationship between the average activation volume and the compression rate is assumed.
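
    The sigmoidal fraction-versus-pressure curves and the rate-dependent over-pressurization can be reproduced qualitatively with a toy KJMA-type calculation (all constants below are invented for illustration and are not the paper's fitted values): with a pressure-activated rate k(P) and a linear ramp P = rt, the transformed fraction is X = 1 - exp(-(∫ k dt)^n), and a faster ramp leaves less time at each pressure, shifting the transition upward.

```python
import math

def fraction_vs_pressure(rate_gpa_s, p_t=1.97, k0=5.0, dv=8.0, n=3,
                         p_max=4.0, steps=4000):
    """Transformed fraction X(P) along a linear ramp P = rate*t (toy KJMA model).

    k(P) grows exponentially with over-pressure above the transition
    pressure p_t; k0, dv, n are illustrative constants, not fitted values.
    """
    dp = p_max / steps
    integral = 0.0                   # time integral of k along the ramp
    curve = []
    for i in range(steps + 1):
        p = i * dp
        k = k0 * math.exp(dv * (p - p_t)) if p > p_t else 0.0
        integral += k * dp / rate_gpa_s          # dt = dp / rate
        curve.append((p, 1.0 - math.exp(-integral ** n)))
    return curve

def p_half(curve):
    """Pressure at which half the sample has transformed."""
    return next(p for p, x in curve if x >= 0.5)

slow = fraction_vs_pressure(0.03)    # slowest rate in the paper's range
fast = fraction_vs_pressure(13.5)    # fastest rate in the paper's range
```

    Comparing the slowest and fastest rates from the paper's range (0.03 and 13.5 GPa/s), the half-transformation pressure of the fast ramp comes out higher, which is the over-pressurization effect.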

  3. Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.

    2009-01-01

    This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. The tests based on other model performance metrics such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.

  4. SU-F-R-34: Quantitative Perfusion Measurement in Rectal Cancer Using Three Different Pharmacokinetic Models: Implications for Prospective Study Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, K; Yue, N; Jabbour, S

    Purpose: To compare three different pharmacokinetic models for analysis of dynamic-contrast-enhanced (DCE)-CT data with respect to different acquisition times and location of region of interest. Methods: Eight rectal cancer patients with pre-treatment DCE-CTs were included. The dynamic sequence started 4–10 seconds (s) after the injection of contrast agent. The scan included a 110 s acquisition with intervals of 40×1s + 15×3s + 4×6s. An experienced oncologist outlined the tumor region. Hotspots with the top 5% enhancement were also identified. Pharmacokinetic analysis was performed using three different models: the deconvolution method, the Patlak model, and the modified Tofts model. Perfusion parameters such as blood flow (BF), blood volume (BV), mean transit time (MTT), permeability-surface-area product (PS), volume transfer constant (Ktrans), and flux rate constant (Kep) were compared with respect to different acquisition times of 45s, 65s, 85s and 105s. Both hotspot and whole-volume variances were also assessed. The differences were compared using the Wilcoxon matched-pairs test and Bland-Altman plots. Results: Moderate correlation was observed for various perfusion parameters (r=0.56–0.72, p<0.0001) but the Wilcoxon test revealed a significant difference among the three models (P < .001). Significant differences in PS were noted between acquisitions of 45s versus longer times of 85s or 105s (p<0.05) using the Patlak model but not with the deconvolution method. In addition, measurements varied substantially between whole-volume and hotspot analysis. Conclusion: The radiation dose of DCE-CT was on average 1.5 times that of an abdomen/pelvic CT, which is not insubstantial. To take DCE-CT forward as a biomarker in oncology, prospective studies should be carefully designed with the optimal image acquisition and analysis technique.
    Our study suggested that: (1) different kinetic models are not interchangeable; (2) a 45 s acquisition might not be sufficient for reliable permeability measurement in rectal cancer using the Patlak model, but might be achievable using the deconvolution method; and (3) local variations existed inside the tumor, so both whole-volume-averaged and local-heterogeneity analyses are recommended for future quantitative studies. This work is supported by the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917) and the Natural Science Foundation of China (NSFC Grant No. 81201091).
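
    The Patlak model mentioned above linearizes the tissue concentration against the normalized integral of the arterial input, C_t(t)/C_p(t) = Ktrans · (∫ C_p dτ)/C_p(t) + v_b, so Ktrans is the slope of a straight-line fit. A self-contained sketch on synthetic curves (the input function and kinetic constants here are invented for the demonstration, not patient data):

```python
import math

# Synthetic arterial input function and tissue curve (invented constants).
KTRANS_TRUE, VB_TRUE = 0.25, 0.05          # per minute, and unitless
t = [i * 0.02 for i in range(1, 101)]      # minutes; start past t = 0
cp = [5.0 * math.exp(-tt / 2.0) for tt in t]

# Cumulative integral of cp by the trapezoidal rule.
cum, total = [], 0.0
for i in range(len(t)):
    if i > 0:
        total += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
    cum.append(total)

# Tissue curve generated from the two-parameter Patlak relation.
ct = [KTRANS_TRUE * cum[i] + VB_TRUE * cp[i] for i in range(len(t))]

# Patlak plot: y = ct/cp against x = cum/cp; slope = Ktrans, intercept = v_b.
x = [cum[i] / cp[i] for i in range(len(t))]
y = [ct[i] / cp[i] for i in range(len(t))]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
```

    Because the synthetic tissue curve is built from the Patlak relation itself, the linear fit recovers Ktrans and v_b essentially exactly; with real DCE data the scatter about the line is what limits the 45 s acquisition, as noted above.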

  5. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
    The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model, which we call the CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
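
    For concreteness, the Smagorinsky model discussed above closes the subgrid stress with an eddy viscosity built solely from the resolved strain rate:

```latex
\nu_t = (C_s \Delta)^2 \,\lvert\bar{S}\rvert, \qquad
\lvert\bar{S}\rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j}
             + \frac{\partial \bar{u}_j}{\partial x_i}\right),
```

    where Δ is the grid filter width and C_s the Smagorinsky constant. Because ν_t depends only on the resolved strain rate, buoyancy, rotation, and stable stratification enter nowhere, which is precisely the limitation the complete model (CM) is designed to remove.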

  6. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1994-06-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
    The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model, which we call the CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.

  7. Liquidity crisis, granularity of the order book and price fluctuations

    NASA Astrophysics Data System (ADS)

    Cristelli, M.; Alfi, V.; Pietronero, L.; Zaccaria, A.

    2010-01-01

    We introduce a microscopic model for the dynamics of the order book to study how the lack of liquidity influences price fluctuations. We use the average density of the stored orders (granularity g) as a proxy for liquidity. This leads to a Price Impact Surface which depends on both volume ω and g. The dependence on the volume (averaged over the granularity) of the Price Impact Surface is found to be a concave power law function ⟨φ(ω,g)⟩_g ∼ ω^δ with δ ≈ 0.59. Instead the dependence on the granularity is φ(ω,g|ω) ∼ g^α with α ≈ −1, showing a divergence of price fluctuations in the limit g → 0. Moreover, even in intermediate situations of finite liquidity, this effect can be very large and it is a natural candidate for understanding the origin of large price fluctuations.
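
    The exponent δ is the slope of the Price Impact Surface in log-log coordinates, so extracting it amounts to a linear fit on logarithms. A sketch on synthetic data generated with δ = 0.59 (the prefactor and volume grid are made up for illustration, not order-book data):

```python
import math

DELTA = 0.59
volumes = [10.0 * (i + 1) for i in range(50)]     # synthetic volume grid
impact = [0.8 * v ** DELTA for v in volumes]      # synthetic <phi(omega)> ~ omega^delta

# Least-squares slope of log(impact) vs log(volume) recovers delta.
lx = [math.log(v) for v in volumes]
ly = [math.log(p) for p in impact]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
delta_hat = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
            sum((a - mx) ** 2 for a in lx)
```

    On noiseless synthetic data the fit returns δ exactly; with empirical impact data the same regression gives the measured exponent.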

  8. A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change.

    PubMed

    Ashraf, M Irfan; Meng, Fan-Rui; Bourque, Charles P-A; MacLean, David A

    2015-01-01

    Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero mean the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm² 5-year⁻¹ and volume: 0.0008 m³ 5-year⁻¹). Model variability described by root mean squared error (RMSE) was 40.53 cm² 5-year⁻¹ in basal area prediction and 0.0393 m³ 5-year⁻¹ in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change.
Artificial intelligence technology has substantial potential in forest modelling.
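
    The model efficiency statistic described above matches the Nash-Sutcliffe form, E = 1 − Σ(oᵢ − pᵢ)² / Σ(oᵢ − ō)². A minimal sketch with made-up observations, showing the two reference points quoted in the abstract (1 for an ideal fit, 0 for a predictor no better than the mean of the observations):

```python
def model_efficiency(obs, pred):
    """Nash-Sutcliffe model efficiency: 1 = ideal fit, 0 = no better than the mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [10.0, 12.0, 14.0, 16.0, 18.0]            # made-up basal-area observations
perfect = model_efficiency(obs, obs)             # ideal fit
mean_pred = model_efficiency(obs, [14.0] * 5)    # always predict the mean
```

    Values between 0 and 1, such as the reported 0.82 and 0.89, sit between these two reference cases.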

  9. A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change

    PubMed Central

    Ashraf, M. Irfan; Meng, Fan-Rui; Bourque, Charles P.-A.; MacLean, David A.

    2015-01-01

    Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero means the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm2 5-year-1 and volume: 0.0008 m3 5-year-1). Model variability described by root mean squared error (RMSE) in basal area prediction was 40.53 cm2 5-year-1 and 0.0393 m3 5-year-1 in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. 
Artificial intelligence technology has substantial potential in forest modelling. PMID:26173081

  10. The Thermodynamic Limit in Mean Field Spin Glass Models

    NASA Astrophysics Data System (ADS)

    Guerra, Francesco; Toninelli, Fabio Lucio

    We present a simple strategy in order to show the existence and uniqueness of the infinite volume limit of thermodynamic quantities, for a large class of mean field disordered models, as for example the Sherrington-Kirkpatrick model, and the Derrida p-spin model. The main argument is based on a smooth interpolation between a large system, made of N spin sites, and two similar but independent subsystems, made of N1 and N2 sites, respectively, with N1+N2=N. The quenched average of the free energy turns out to be subadditive with respect to the size of the system. This gives immediately convergence of the free energy per site, in the infinite volume limit. Moreover, a simple argument, based on concentration of measure, gives the almost sure convergence, with respect to the external noise. Similar results hold also for the ground state energy per site.
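
    The convergence step can be stated compactly. Writing F_N for the quenched average of the free energy of the N-site system, the interpolation argument gives subadditivity, and Fekete's lemma for subadditive sequences then yields the limit:

```latex
F_N \le F_{N_1} + F_{N_2}, \quad N = N_1 + N_2
\;\Longrightarrow\;
\lim_{N \to \infty} \frac{F_N}{N} = \inf_{N \ge 1} \frac{F_N}{N}.
```

    Concentration of measure then upgrades this convergence of the quenched average to almost sure convergence with respect to the external noise, as stated above.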

  11. Forecasting models for flow and total dissolved solids in Karoun river-Iran

    NASA Astrophysics Data System (ADS)

    Salmani, Mohammad Hassan; Salmani Jajaei, Efat

    2016-04-01

    Water quality is one of the most important factors contributing to a healthy life. From the water quality management point of view, TDS (total dissolved solids) is the most important factor, and many water development plans have been implemented in recognition of it. However, these plans have not been entirely successful in overcoming the poor water quality problem, so there is a substantial volume of related studies in the literature. We study TDS and the water flow of the Karoun river in southwest Iran. We collected the necessary time series data from the Harmaleh station on the river. We present two univariate Seasonal Autoregressive Integrated Moving Average (ARIMA) models to forecast TDS and water flow in this river. Then, we build a Transfer Function (TF) model to formulate TDS as a function of water flow volume. A performance comparison between the Seasonal ARIMA and TF models is presented.
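
    The differencing steps at the core of a seasonal ARIMA model can be illustrated on a synthetic monthly series (made-up numbers, not the Harmaleh data): a lag-12 seasonal difference removes the annual cycle, and a further first difference removes the residual trend, leaving a stationary series.

```python
import math

PERIOD = 12
# Synthetic monthly series: linear trend plus an annual sinusoidal cycle.
series = [100.0 + 0.5 * t + 10.0 * math.sin(2.0 * math.pi * t / PERIOD)
          for t in range(120)]

def difference(x, lag=1):
    """Lagged difference of a series: x[t] - x[t - lag]."""
    return [x[i] - x[i - lag] for i in range(lag, len(x))]

seasonal = difference(series, PERIOD)    # cancels the annual cycle exactly
stationary = difference(seasonal, 1)     # removes the remaining constant trend step
```

    On this idealized series the seasonal difference is a constant (the 12-month trend increment) and the subsequent first difference is zero; real flow and TDS data retain irregular residuals, which the AR and MA terms of the model then capture.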

  12. Increased fMRI Sensitivity at Equal Data Burden Using Averaged Shifted Echo Acquisition

    PubMed Central

    Witt, Suzanne T.; Warntjes, Marcel; Engström, Maria

    2016-01-01

    There is growing evidence as to the benefits of collecting BOLD fMRI data with increased sampling rates. However, many of the newly developed acquisition techniques developed to collect BOLD data with ultra-short TRs require hardware, software, and non-standard analytic pipelines that may not be accessible to all researchers. We propose to incorporate the method of shifted echo into a standard multi-slice, gradient echo EPI sequence to achieve a higher sampling rate with a TR of <1 s with acceptable spatial resolution. We further propose to incorporate temporal averaging of consecutively acquired EPI volumes to both ameliorate the reduced temporal signal-to-noise inherent in ultra-fast EPI sequences and reduce the data burden. BOLD data were collected from 11 healthy subjects performing a simple, event-related visual-motor task with four different EPI sequences: (1) reference EPI sequence with TR = 1440 ms, (2) shifted echo EPI sequence with TR = 700 ms, (3) shifted echo EPI sequence with every two consecutively acquired EPI volumes averaged and effective TR = 1400 ms, and (4) shifted echo EPI sequence with every four consecutively acquired EPI volumes averaged and effective TR = 2800 ms. Both the temporally averaged sequences exhibited increased temporal signal-to-noise over the shifted echo EPI sequence. The shifted echo sequence with every two EPI volumes averaged also had significantly increased BOLD signal change compared with the other three sequences, while the shifted echo sequence with every four EPI volumes averaged had significantly decreased BOLD signal change compared with the other three sequences. The results indicated that incorporating the method of shifted echo into a standard multi-slice EPI sequence is a viable method for achieving increased sampling rate for collecting event-related BOLD data. 
    Further, averaging every two consecutively acquired EPI volumes significantly increased the measured BOLD signal change and the subsequently calculated activation map statistics. PMID:27932947
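
    The temporal signal-to-noise gain from averaging can be illustrated on a toy voxel time series (synthetic Gaussian noise, not fMRI data): averaging every two independent volumes leaves the mean unchanged and shrinks the noise standard deviation by roughly √2.

```python
import random
import statistics

rng = random.Random(42)
signal = 100.0
# Toy voxel time series: constant signal plus independent Gaussian noise.
series = [signal + rng.gauss(0.0, 5.0) for _ in range(20_000)]

# Average each pair of consecutive "volumes", as in the effective-TR sequences.
averaged = [(series[i] + series[i + 1]) / 2.0 for i in range(0, len(series), 2)]

def tsnr(x):
    """Temporal SNR: mean over standard deviation of the time series."""
    return statistics.mean(x) / statistics.stdev(x)

gain = tsnr(averaged) / tsnr(series)   # expected near sqrt(2) for independent noise
```

    Real fMRI noise is temporally autocorrelated, so the practical gain is smaller than the independent-noise √2 bound sketched here.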

  13. Tritium volume activity in the Baltic Sea in 1987-1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Styro, D.B.; Korotkov, V.P.

    Tritium volume activities measured in the Baltic Sea are summarized in this paper. Activity levels were determined by the liquid scintillation method with a LS-1000 counter. The field investigations showed that the tritium volume activity in the Baltic Sea can change substantially in absolute magnitude. Therefore, average volume activity is used as an indicator of natural content. Correlations between calculated (averaged) tritium activity levels and the Chernobyl accident are very briefly discussed. 7 refs., 2 figs., 1 tab.

  14. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, using Monte Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate the sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
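
    The reweighting can be sketched in a few lines with a toy one-parameter population and a deterministic "search" (both invented for illustration): generic injections drawn from a distribution q are reused for a target population p by weighting each found injection by p/q, so V ≈ V_tot · (1/N) Σ_found p(xᵢ)/q(xᵢ).

```python
import random

rng = random.Random(1)
V_TOT = 1.0                 # astrophysical volume the injections are placed in
N = 200_000

# Generic injections: parameter x drawn uniformly on [0, 1), i.e. q(x) = 1.
injections = [rng.random() for _ in range(N)]

def detected(x):
    """Toy search: an injection is 'found' iff x < 0.5 (stand-in for an SNR cut)."""
    return x < 0.5

def p(x):
    """Target population density on [0, 1): p(x) = 2x (invented for the demo)."""
    return 2.0 * x

# Weighted MC estimate of the population-averaged sensitive volume.
v_weighted = V_TOT * sum(p(x) for x in injections if detected(x)) / N
```

    Swapping in a different p reuses the same injections, which is the cost saving described above. For this toy setup the exact answer is V_tot · ∫ from 0 to 0.5 of 2x dx = 0.25, and the weighted estimate agrees to within MC error.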

  15. The sudden coalescence model of the boiling crisis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrica, P.M.; Clausse, A.

    1995-09-01

    A local two-phase flow integral model of nucleate boiling and boiling crisis is presented. The model is based on average balances on a control volume, yielding a set of three nonlinear differential equations for the local void fraction, bubble number density, and velocity. The boiling crisis at the critical heat flux is interpreted as a dynamic transition caused by the coalescence of bubbles near the heater. The theoretical dynamic model is compared with experimental results obtained for linear power ramps in a horizontal plate heater in R-113, showing excellent qualitative agreement.

  16. Geostatistics and the representative elementary volume of gamma ray tomography attenuation in rock cores

    USGS Publications Warehouse

    Vogel, J.R.; Brown, G.O.

    2003-01-01

    Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Representative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine the sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point-value semivariogram is determined. This is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
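
    The empirical semivariogram behind these fits is γ(h) = (1/2N(h)) Σ [z(x+h) − z(x)]², with N(h) the number of point pairs separated by lag h. A minimal 1D sketch on a hand-made, perfectly trending transect (not the tomography data):

```python
def semivariogram(z, lag):
    """Empirical semivariogram at an integer lag for a regularly spaced transect."""
    diffs = [(z[i + lag] - z[i]) ** 2 for i in range(len(z) - lag)]
    return sum(diffs) / (2.0 * len(diffs))

z = [1.0, 2.0, 3.0, 4.0, 5.0]    # hand-made transect with a pure linear trend
gamma1 = semivariogram(z, 1)      # 0.5
gamma2 = semivariogram(z, 2)      # 2.0
```

    The unbounded growth of γ with lag on trending data is exactly the nonstationarity that the REV and scale-disparity arguments above are concerned with; a stationary sample would instead level off at a sill.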

  17. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    NASA Astrophysics Data System (ADS)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims to provide some of the maneuverability characteristics of a Japanese bulk carrier model (JBC) in calm water, using two computational fluid dynamics solvers, SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the finite volume method (FVM). The paper compares the numerical results of the calm water test for the JBC model with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimum computational resources.

  18. Measurement of Crystalline Lens Volume During Accommodation in a Lens Stretcher

    PubMed Central

    Marussich, Lauren; Manns, Fabrice; Nankivil, Derek; Maceo Heilman, Bianca; Yao, Yue; Arrieta-Quintero, Esdras; Ho, Arthur; Augusteyn, Robert; Parel, Jean-Marie

    2015-01-01

    Purpose: To determine if the lens volume changes during accommodation. Methods: The study used data acquired on 36 cynomolgus monkey lenses that were stretched in a stepwise fashion to simulate disaccommodation. At each step, stretching force and dioptric power were measured and a cross-sectional image of the lens was acquired using an optical coherence tomography system. Images were corrected for refractive distortions and lens volume was calculated assuming rotational symmetry. The average change in lens volume was calculated, and the relations between volume change and power change and between volume change and stretching force were quantified by linear regression of the volume-power and volume-force plots. Results: The mean (±SD) volume in the unstretched (accommodated) state was 97 ± 8 mm3. On average, there was a small but statistically significant (P = 0.002) increase in measured lens volume with stretching; the mean change in lens volume was +0.8 ± 1.3 mm3. The mean volume-power and volume-load slopes were −0.018 ± 0.058 mm3/D and +0.16 ± 0.40 mm3/g. Conclusions: Lens volume remains effectively constant during accommodation, with changes that are less than 1% on average. This result supports the hypothesis that the change in lens shape with accommodation is accompanied by a redistribution of tissue within the capsular bag, without significant compression of the lens contents or fluid exchange through the capsule. PMID:26161985
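Under the rotational-symmetry assumption, the volume calculation reduces to a solid-of-revolution integral, V = pi * integral of r(z)^2 dz over the corrected profile. A minimal sketch using a synthetic ellipsoidal profile (semi-axes chosen so the volume lands near the reported ~97 mm3), not OCT data:

```python
import numpy as np

# Solid-of-revolution volume from a half-thickness profile r(z). The
# half-ellipse profile below is synthetic and purely illustrative.

def volume_of_revolution(z, r):
    y = r ** 2
    # trapezoidal rule for pi * integral(r^2 dz)
    return np.pi * float(np.sum((y[1:] + y[:-1]) * np.diff(z)) / 2.0)

a, b = 2.0, 3.4   # axial and radial semi-axes in mm (illustrative values)
z = np.linspace(-a, a, 2001)
r = b * np.sqrt(np.clip(1.0 - (z / a) ** 2, 0.0, None))
V = volume_of_revolution(z, r)   # ellipsoid of revolution: (4/3) * pi * a * b^2
```

For this profile the numerical result should match the closed-form ellipsoid volume, about 96.8 mm3, confirming the quadrature before applying it to measured profiles.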

  19. Impacts of future climate change on urban flood volumes in Hohhot in northern China: benefits of climate change mitigation and adaptations

    NASA Astrophysics Data System (ADS)

    Zhou, Qianqian; Leng, Guoyong; Huang, Maoyi

    2018-01-01

    As China becomes increasingly urbanised, flooding has become a regular occurrence in its major cities. Assessing the effects of future climate change on urban flood volumes is crucial to informing better management of such disasters given the severity of the devastating impacts of flooding (e.g. the 2016 flooding events across China). Although recent studies have investigated the impacts of future climate change on urban flooding, the effects of both climate change mitigation and adaptation have rarely been accounted for together in a consistent framework. In this study, we assess the benefits of mitigating climate change by reducing greenhouse gas (GHG) emissions and locally adapting to climate change by modifying drainage systems to reduce urban flooding under various climate change scenarios through a case study conducted in northern China. The urban drainage model - Storm Water Management Model - was used to simulate urban flood volumes using current and two adapted drainage systems (i.e. pipe enlargement and low-impact development, LID), driven by bias-corrected meteorological forcing from five general circulation models in the Coupled Model Intercomparison Project Phase 5 archive. Results indicate that urban flood volume is projected to increase by 52 % over 2020-2040 compared to the volume in 1971-2000 under the business-as-usual scenario (i.e. Representative Concentration Pathway (RCP) 8.5). The magnitudes of urban flood volumes are found to increase nonlinearly with changes in precipitation intensity. On average, the projected flood volume under RCP 2.6 is 13 % less than that under RCP 8.5, demonstrating the benefits of global-scale climate change mitigation efforts in reducing local urban flood volumes. Comparison of reduced flood volumes between climate change mitigation and local adaptation (by improving drainage systems) scenarios suggests that local adaptation is more effective than climate change mitigation in reducing future flood volumes. 
This has broad implications for the research community relative to drainage system design and modelling in a changing environment. This study highlights the importance of accounting for local adaptation when coping with future urban floods.
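The relation between the reported scenario figures can be checked with simple arithmetic; the baseline value below is arbitrary, and only the percentage changes come from the abstract:

```python
# Arithmetic check of the scenario figures quoted above. The baseline flood
# volume is an arbitrary unit; only the percentages are from the abstract.

baseline = 100.0                      # flood volume, 1971-2000 (arbitrary units)
rcp85 = baseline * 1.52               # +52 % under RCP 8.5, 2020-2040
rcp26 = rcp85 * (1.0 - 0.13)          # RCP 2.6 averages 13 % less than RCP 8.5
increase_under_rcp26 = rcp26 / baseline - 1.0   # net increase under RCP 2.6
```

Even under the mitigation scenario, flood volume still rises roughly 32% above the historical baseline, which is why the study finds local drainage adaptation more effective than mitigation alone.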

  20. Impacts of future climate change on urban flood volumes in Hohhot in northern China: benefits of climate change mitigation and adaptations

    DOE PAGES

    Zhou, Qianqian; Leng, Guoyong; Huang, Maoyi

    2018-01-15

    As China becomes increasingly urbanised, flooding has become a regular occurrence in its major cities. Assessing the effects of future climate change on urban flood volumes is crucial to informing better management of such disasters given the severity of the devastating impacts of flooding (e.g. the 2016 flooding events across China). Although recent studies have investigated the impacts of future climate change on urban flooding, the effects of both climate change mitigation and adaptation have rarely been accounted for together in a consistent framework. In this study, we assess the benefits of mitigating climate change by reducing greenhouse gas (GHG) emissions and locally adapting to climate change by modifying drainage systems to reduce urban flooding under various climate change scenarios through a case study conducted in northern China. The urban drainage model – Storm Water Management Model – was used to simulate urban flood volumes using current and two adapted drainage systems (i.e. pipe enlargement and low-impact development, LID), driven by bias-corrected meteorological forcing from five general circulation models in the Coupled Model Intercomparison Project Phase 5 archive. Results indicate that urban flood volume is projected to increase by 52 % over 2020–2040 compared to the volume in 1971–2000 under the business-as-usual scenario (i.e. Representative Concentration Pathway (RCP) 8.5). The magnitudes of urban flood volumes are found to increase nonlinearly with changes in precipitation intensity. On average, the projected flood volume under RCP 2.6 is 13 % less than that under RCP 8.5, demonstrating the benefits of global-scale climate change mitigation efforts in reducing local urban flood volumes. Comparison of reduced flood volumes between climate change mitigation and local adaptation (by improving drainage systems) scenarios suggests that local adaptation is more effective than climate change mitigation in reducing future flood volumes. 
This has broad implications for the research community relative to drainage system design and modelling in a changing environment. Furthermore, this study highlights the importance of accounting for local adaptation when coping with future urban floods.

  1. Impacts of future climate change on urban flood volumes in Hohhot in northern China: benefits of climate change mitigation and adaptations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Qianqian; Leng, Guoyong; Huang, Maoyi

    As China becomes increasingly urbanised, flooding has become a regular occurrence in its major cities. Assessing the effects of future climate change on urban flood volumes is crucial to informing better management of such disasters given the severity of the devastating impacts of flooding (e.g. the 2016 flooding events across China). Although recent studies have investigated the impacts of future climate change on urban flooding, the effects of both climate change mitigation and adaptation have rarely been accounted for together in a consistent framework. In this study, we assess the benefits of mitigating climate change by reducing greenhouse gas (GHG) emissions and locally adapting to climate change by modifying drainage systems to reduce urban flooding under various climate change scenarios through a case study conducted in northern China. The urban drainage model – Storm Water Management Model – was used to simulate urban flood volumes using current and two adapted drainage systems (i.e. pipe enlargement and low-impact development, LID), driven by bias-corrected meteorological forcing from five general circulation models in the Coupled Model Intercomparison Project Phase 5 archive. Results indicate that urban flood volume is projected to increase by 52 % over 2020–2040 compared to the volume in 1971–2000 under the business-as-usual scenario (i.e. Representative Concentration Pathway (RCP) 8.5). The magnitudes of urban flood volumes are found to increase nonlinearly with changes in precipitation intensity. On average, the projected flood volume under RCP 2.6 is 13 % less than that under RCP 8.5, demonstrating the benefits of global-scale climate change mitigation efforts in reducing local urban flood volumes. Comparison of reduced flood volumes between climate change mitigation and local adaptation (by improving drainage systems) scenarios suggests that local adaptation is more effective than climate change mitigation in reducing future flood volumes. 
This has broad implications for the research community relative to drainage system design and modelling in a changing environment. Furthermore, this study highlights the importance of accounting for local adaptation when coping with future urban floods.

  2. Analysis of nodal coverage utilizing image guided radiation therapy for primary gynecologic tumor volumes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Faisal; Loma Linda University Medical Center, Department of Radiation Oncology, Loma Linda, CA; Sarkar, Vikren

Purpose: To evaluate radiation dose delivered to pelvic lymph nodes, if daily Image Guided Radiation Therapy (IGRT) was implemented with treatment shifts based on the primary site (primary clinical target volume [CTV]). Our secondary goal was to compare dosimetric coverage with patient outcomes. Materials and methods: A total of 10 female patients with gynecologic malignancies were evaluated retrospectively after completion of definitive intensity-modulated radiation therapy (IMRT) to their pelvic lymph nodes and primary tumor site. IGRT consisted of daily kilovoltage computed tomography (CT)-on-rails imaging fused with initial planning scans for position verification. The initial plan was created using Varian's Eclipse treatment planning software. Patients were treated with a median radiation dose of 45 Gy (range: 37.5 to 50 Gy) to the primary volume and 45 Gy (range: 45 to 64.8 Gy) to nodal structures. One IGRT scan per week was randomly selected from each patient's treatment course and re-planned on the Eclipse treatment planning station. CTVs were recreated by fusion on the IGRT image series, and the patient's treatment plan was applied to the new image set to calculate delivered dose. We evaluated the minimum, maximum, and 95% dose coverage for primary and nodal structures. Reconstructed primary tumor volumes were recreated within 4.7% of initial planning volume (0.9% to 8.6%), and reconstructed nodal volumes were recreated to within 2.9% of initial planning volume (0.01% to 5.5%). Results: Dosimetric parameters deviated by less than 10% on average (range: 1% to 9%) from the original planned dose (45 Gy) for primary and nodal volumes on all patients (n = 10). For all patients, ≥99.3% of the primary tumor volume received ≥ 95% the prescribed dose (V95%) and the average minimum dose was 96.1% of the prescribed dose. In evaluating nodal CTV coverage, ≥ 99.8% of the volume received ≥ 95% the prescribed dose and the average minimum dose was 93%. 
In evaluating individual IGRT sessions, we found that 6 patients had an estimated minimal nodal CTV dose less than 90% (range: 78 to 99%) of that planned. With a median follow-up of 42.5 months, 2 patients experienced systemic disease progression at an average of 19.6 months. One patient was found to have a local or regional failure with an average follow-up of 42 months. Conclusion: Using only 3-dimensional IGRT corrections in gynecological radiation allows excellent coverage of the primary target volume and good average nodal CTV coverage. If IGRT corrections are based on alignment to the primary tumor volume and can only be applied in 3 translational degrees of freedom, nodal volumes may be underdosed. Utilizing multiple IGRT sessions appears to average out dose discrepancies over the course of treatment; the implication of underdosing in a single IGRT session needs further evaluation in future studies. Given the concern over minimum dose to a nodal target volume, these findings suggest caution when using IGRT and IMRT in gynecological radiation patients. Possible techniques to overcome this situation include averaging shifts between tumor and nodal volumes, use of a treatment couch with 6 degrees of freedom, deformable registration, or adaptive planning.

  3. Predicting performance of polymer-bonded Terfenol-D composites under different magnetic fields

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2009-09-01

    Considering the demagnetization effect, a model to calculate the magnetostriction of a single particle under an applied field is first created. Based on the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach to calculate the average magnetostriction of the composites under any applied field, as well as at saturation, is developed by treating the particle magnetostriction as an eigenstrain. The results indicate that the saturation magnetostriction of magnetostrictive composites increases with increasing particle aspect ratio and particle volume fraction, and with decreasing Young's modulus of the matrix. The influence of the applied field on the magnetostriction of the composites becomes more significant at larger particle volume fractions or aspect ratios. Experiments were performed to verify the effectiveness of the model; the results indicate that the model can only provide approximate results.

  4. Tug-of-war model for the two-bandit problem: nonlocally-correlated parallel exploration via resource conservation.

    PubMed

    Kim, Song-Ju; Aono, Masashi; Hara, Masahiko

    2010-07-01

    We propose a model - the "tug-of-war (TOW) model" - to conduct unique parallel searches using many nonlocally-correlated search agents. The model is based on a property of a single-celled amoeba, the true slime mold Physarum, which maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a "nonlocal correlation" among the branches: a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). This nonlocal correlation was shown to be useful for decision making in the case of a dilemma. The multi-armed bandit problem is to determine the optimal strategy for maximizing the total reward sum under incompatible demands: either exploiting the rewards obtained using the already collected information or exploring new information for acquiring higher payoffs involving risks. Our model can efficiently manage this "exploration-exploitation dilemma" and exhibits good performance. The average accuracy rate of our model is higher than those of well-known algorithms such as the modified ε-greedy algorithm and the modified softmax algorithm, especially for solving relatively difficult problems. Moreover, our model flexibly adapts to changing environments, a property essential for living organisms surviving in uncertain environments.
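The conservation principle behind the TOW model can be illustrated with a toy two-armed bandit simulation. The update rule below is a simplified sketch of the idea (winner's branch expands, the other shrinks by the same amount), not the authors' exact dynamics:

```python
import random

# Toy sketch of the conservation idea behind the TOW model: two branches
# share a fixed total resource, so a volume increment on one branch is
# immediately compensated by a decrement on the other, nonlocally
# correlating the two arm estimates. Parameters are illustrative.

def tow_bandit(p=(0.3, 0.7), trials=2000, seed=1):
    random.seed(seed)
    x = [0.0, 0.0]        # branch displacements; x[0] + x[1] stays 0
    pulls = [0, 0]
    for _ in range(trials):
        arm = 0 if x[0] >= x[1] else 1      # pull the currently longer branch
        pulls[arm] += 1
        delta = 1.0 if random.random() < p[arm] else -1.0
        x[arm] += delta                     # reward expands the pulled branch...
        x[1 - arm] -= delta                 # ...conservation shrinks the other
    return pulls

pulls = tow_bandit()
```

Because every reward on one arm simultaneously penalizes the other arm's estimate, the search concentrates on the better arm (here p = 0.7) without a separate exploration schedule.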

  5. Medical Mondays: ED Utilization for Medicaid Recipients Depends on the Day of the Week, Season, and Holidays.

    PubMed

    Castner, Jessica; Yin, Yong; Loomis, Dianne; Hewner, Sharon

    2016-07-01

    The purpose of this study is to describe and explain the temporal and seasonal trends in ED utilization for a low-income population. A retrospective analysis of 66,487 ED Medicaid-insured health care claims in 2009 was conducted for 2 Western New York Counties using time-series analysis with autoregressive moving average (ARMA) models. The final ARMA (2,0) model indicated an autoregressive structure with up to a 2-day lag. ED volume is lower on weekends than on weekdays, and the highest volumes are on Mondays. Summer and fall seasons demonstrated higher volumes, whereas lower volume outliers were associated with holidays. Day of the week was an influential predictor of ED utilization in low-income persons. Season and holidays are also predictors of ED utilization. These calendar-based patterns support the need for ongoing and future emergency leaders' collaborations in community-based care system redesign to meet the health care access needs of low-income persons. Copyright © 2016 Emergency Nurses Association. Published by Elsevier Inc. All rights reserved.
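An ARMA(2,0) model, as used above, is simply an AR(2) regression of today's volume on the previous two days. A minimal sketch on synthetic data (not the Medicaid claims), fitting the coefficients by ordinary least squares:

```python
import numpy as np

# AR(2) illustration: y_t = c + phi1*y_{t-1} + phi2*y_{t-2} + e_t, fitted by
# least squares on a synthetic daily-volume series. All numbers are invented.

rng = np.random.default_rng(42)
c, phi1, phi2 = 10.0, 0.5, 0.2
y = np.zeros(3000)
for t in range(2, len(y)):
    y[t] = c + phi1 * y[t - 1] + phi2 * y[t - 2] + rng.normal()

# Design matrix: intercept, one-day lag, two-day lag.
X = np.column_stack([np.ones(len(y) - 2), y[1:-1], y[:-2]])
c_hat, phi1_hat, phi2_hat = np.linalg.lstsq(X, y[2:], rcond=None)[0]
```

In practice one would add day-of-week, season, and holiday indicators as exogenous regressors to the same design matrix, which is how the calendar effects reported above enter the model.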

  6. Estimation of carbon dioxide emissions per urban center link unit using data collected by the Advanced Traffic Information System in Daejeon, Korea

    NASA Astrophysics Data System (ADS)

    Ryu, B. Y.; Jung, H. J.; Bae, S. H.; Choi, C. U.

    2013-12-01

    CO2 emissions on roads in urban centers substantially affect global warming, so it is important to quantify CO2 emissions per link unit in order to reduce them. In this study, we utilized real-time traffic data and developed a methodology for estimating CO2 emissions per link unit. Owing to the recent development of vehicle-to-infrastructure (V2I) communication technology, data from probe vehicles (PVs) can be collected and speed per link unit can be calculated. Among existing emission calculation methodologies, mesoscale modeling, a representative technique, requires speed and traffic data per link unit. Because it is not feasible to install fixed detectors at every link for traffic data collection, we developed a model for traffic volume estimation utilizing the number of PVs, which is collected alongside the PV data. Multiple linear regression and an artificial neural network (ANN) were used for estimating the traffic volume; the independent variables and input data for each model are the number of PVs, the travel time index (TTI), the number of lanes, and the time slot. The traffic volume estimation results show that the mean absolute percentage error (MAPE) of the ANN is 18.67%, indicating that it is the more effective of the two. The ANN-based traffic volume estimation served as the basis for the calculation of emissions per link unit. The daily average emissions for Daejeon, where this study was based, were 2210.19 ton/day. By vehicle type, passenger cars accounted for 71.28% of the total emissions. By road, Gyeryongro emitted 125.48 ton/day, accounting for 5.68% of the total emissions, the highest percentage of all roads. In terms of emissions per kilometer, Hanbatdaero had the highest emission volume, with 7.26 ton/day/km on average. 
This study demonstrates that real-time traffic data allow emissions to be estimated at the link-unit level. Furthermore, an analysis of CO2 emissions can support traffic management in making decisions related to the reduction of carbon emissions.
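The MAPE metric used to compare the regression and ANN models above is straightforward to compute; the observed and estimated counts here are made up for illustration:

```python
# Mean absolute percentage error (MAPE), the metric used above to compare
# traffic-volume models. The counts below are invented, not Daejeon data.

def mape(actual, predicted):
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

observed = [120, 80, 200, 150]    # hypothetical link-unit traffic counts
estimated = [110, 90, 180, 160]   # hypothetical model estimates

error = mape(observed, estimated)   # 9.375 %
```

Note that MAPE weights errors relative to the observed value, so under-counting on low-volume links is penalized more heavily than the same absolute error on busy links.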

  7. Modeling of particle agglomeration in nanofluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, K. Hari; Neti, S.; Oztekin, A.

    2015-03-07

    Agglomeration strongly influences the stability, or shelf life, of a nanofluid. The present computational and experimental study investigates the rate of agglomeration quantitatively. Agglomeration in nanofluids is attributed to the net effect of various inter-particle interaction forces. For the nanofluid considered here, the net inter-particle force depends on particle size, volume fraction, pH, and electrolyte concentration. A solution of the discretized and coupled population balance equations can yield particle sizes as a function of time. The nanofluid prepared here consists of alumina nanoparticles with an average particle size of 150 nm dispersed in de-ionized water. As the pH of the colloid was moved towards the isoelectric point of alumina nanofluids, the rate of increase of average particle size increased with time due to the lower net positive charge on the particles. The rate at which the average particle size increases is predicted and measured for different electrolyte concentrations and volume fractions. The higher rate of agglomeration is attributed to the decrease in the electrostatic double-layer repulsion forces. The rate of agglomeration decreases as the size of the nanoparticle clusters increases, approaching zero when all the clusters are nearly uniform in size. Predicted rates of agglomeration agree adequately with the measured values, validating the mathematical model and the numerical approach employed.

  8. Two-order-parameter description of liquid Al under five different pressures

    NASA Astrophysics Data System (ADS)

    Li, Y. D.; Hao, Qing-Hai; Cao, Qi-Long; Liu, C. S.

    2008-11-01

    In the present work, using the glue potential, constant-pressure molecular-dynamics simulations of liquid Al under five different pressures and a systematic analysis of the local atomic structures have been performed in order to test the two-order-parameter model proposed by Tanaka [Phys. Rev. Lett. 80, 5750 (1998)], originally for explaining the unusual behaviors of liquid water. The temperature dependence of the bond order parameter Q6 in liquid Al under the five pressures can be well fitted by the functional expression Q6/(1 - Q6) = Q6^0 exp[(ΔE - PΔV)/(k_B T)], which yields the energy gain ΔE and the volume change ΔV upon the formation of a locally favored structure: ΔE = 0.025 eV and ΔV = -0.27 Å^3. ΔE is nearly equal to the difference between the average bond energy of the other types of bonds and the average bond energy of 1551 bonds (characterizing the icosahedron-like local structure); ΔV can be explained as the average volume occupied by one atom in icosahedra minus that occupied by one atom in other structures. With the obtained ΔE and ΔV, it is satisfactorily explained that the density of liquid Al displays a much weaker nonlinear dependence on temperature under lower pressures. It is thus demonstrated that the behavior of liquid Al can be well described by the two-order-parameter model.

  9. Assessment of vasoreactivity using videodensitometry coronary angiography.

    PubMed

    Molloi, Sabee; Berenji, Gholam R; Dang, Trien T; Kassab, Ghassan

    2003-08-01

    Previous studies demonstrated that dysfunction of vasomotor tone (VT) is closely linked to the development of atherosclerosis and is considered important in the very early stages of atherogenesis. Currently, the evaluation of VT relies on lumen changes in response to vasoactive stimuli using quantitative coronary angiography (QCA) based on geometric edge detection (ED). However, using ED to measure lumen diameters is inherently associated with large uncertainties, and videodensitometry (VD) methods have important advantages over ED for QCA. The objective of this study was to investigate the reliability of the VD and ED techniques in determining the effect of nitroglycerin (NTG) on cross-sectional area (CSA) and volume changes in a swine animal model for evaluating coronary vasoreactivity. Coronary angiography was performed on four anesthetized swine. CSA and volume were measured in the left anterior descending (LAD) coronary artery using VD before and after intracoronary injection of 0.3 mg of NTG; CSA was also calculated using standard QCA based on ED. The average CSA changes in the proximal, middle, and distal branches measured using VD were 23.83% (+/-10.76%), 30.78% (+/-18.39%), and 27.34% (+/-36.53%), respectively. Similarly, the average CSA changes in the proximal, middle, and distal branches measured using ED were 15.02% (+/-36.38%), 22.02% (+/-26.12%), and 38.00% (+/-48.31%), respectively. The average lumen volume change measured using VD was 29.79% (+/-14.79%). To evaluate the relative reliability of the techniques, the significance of deviation (SOD) was calculated: the ratio of the change after NTG to the measurement error. The average SOD for CSA across all branches was 1.86 based on VD and 0.69 based on ED, and the SOD for the volume measurement was 2.78. Lumen changes measured by VD showed substantial improvement in reliability when compared to ED. 
Moreover, VD can be used to measure substantially smaller changes in lumen dimension in response to vasoactive stimuli than the standard QCA based on ED. Finally, VD allows the measurement of arterial volume, which is not possible with ED.
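The significance of deviation (SOD) defined above is just the measured change divided by the measurement error. In the sketch below, the ~10.7% error is back-calculated from the reported volume SOD for illustration, not a reported quantity:

```python
# Significance of deviation (SOD), as defined in the abstract above: the
# ratio of the measured change after NTG to the measurement error. The
# 10.7% measurement error is back-calculated here for illustration only.

def sod(change_percent, measurement_error_percent):
    return change_percent / measurement_error_percent

volume_sod = sod(29.79, 10.7)   # close to the reported volume SOD of 2.78
```

An SOD well above 1 means the vasoactive response exceeds the technique's noise floor, which is why the volume measurement (SOD 2.78) is the most reliable readout here.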

  10. 40 CFR 1045.730 - What ABT reports must I send to EPA?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... volumes for the model year with a point of retail sale in the United States, as described in § 1045.701(j...) Show that your net balance of emission credits from all your participating families in each averaging... errors mistakenly decreased your balance of emission credits, you may correct the errors and recalculate...

  11. Dry granular avalanche impact force on a rigid wall: Analytic shock solution versus discrete element simulations

    NASA Astrophysics Data System (ADS)

    Albaba, Adel; Lambert, Stéphane; Faug, Thierry

    2018-05-01

    The present paper investigates the mean impact force exerted by a granular mass flowing down an incline and impacting a rigid wall of semi-infinite height. First, this granular flow-wall interaction problem is modeled by numerical simulations based on the discrete element method (DEM). These DEM simulations allow computing the depth-averaged quantities—thickness, velocity, and density—of the incoming flow and the resulting mean force on the rigid wall. Second, that problem is described by a simple analytic solution based on a depth-averaged approach for a traveling compressible shock wave, whose volume is assumed to shrink into a singular surface, and which coexists with a dead zone. It is shown that the dead-zone dynamics and the mean force on the wall computed from DEM can be reproduced reasonably well by the analytic solution proposed over a wide range of slope angle of the incline. These results are obtained by feeding the analytic solution with the thickness, the depth-averaged velocity, and the density averaged over a certain distance along the incline rather than flow quantities taken at a singular section before the jump, thus showing that the assumption of a shock wave volume shrinking into a singular surface is questionable. The finite length of the traveling wave upstream of the grains piling against the wall must be considered. The sensitivity of the model prediction to that sampling length remains complicated, however, which highlights the need of further investigation about the properties and the internal structure of the propagating granular wave.

  12. Water input requirements of the rapidly shrinking Dead Sea

    NASA Astrophysics Data System (ADS)

    Abu Ghazleh, Shahrazad; Hartmann, Jens; Jansen, Nils; Kempe, Stephan

    2009-05-01

    The deepest point on Earth, the Dead Sea level, has been dropping alarmingly since 1978 by 0.7 m/a on average due to the accelerating water consumption in the Jordan catchment and stood in 2008 at 420 m below sea level. In this study, a terrain model of the surface area and water volume of the Dead Sea was developed from the Shuttle Radar Topography Mission data using ArcGIS. The model shows that the lake shrinks on average by 4 km2/a in area and by 0.47 km3/a in volume, amounting to a cumulative loss of 14 km3 in the last 30 years. The receding level leaves almost annually erosional terraces, recorded here for the first time by Differential Global Positioning System field surveys. The terrace altitudes were correlated among the different profiles and dated to specific years of the lake level regression, illustrating the tight correlation between the morphology of the terrace sequence and the receding lake level. Our volume-level model described here and previous work on groundwater inflow suggest that the projected Dead Sea-Red Sea channel or the Mediterranean-Dead Sea channel must have a carrying capacity of >0.9 km3/a in order to slowly re-fill the lake to its former level and to create a sustainable system of electricity generation and freshwater production by desalinization. Moreover, such a channel will maintain tourism and potash industry on both sides of the Dead Sea and reduce the natural hazard caused by the recession.
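The rates quoted above are mutually consistent, as a quick arithmetic check shows:

```python
# Consistency check of the figures quoted above: an average loss of about
# 0.47 km^3/a sustained over the ~30 years since 1978 reproduces the stated
# cumulative volume loss of ~14 km^3.

annual_volume_loss = 0.47                      # km^3 per year
years = 30
cumulative_loss = annual_volume_loss * years   # about 14.1 km^3
```

The same logic motivates the channel sizing: an inflow above ~0.9 km^3/a must cover both the current deficit and the extra evaporation from the re-expanding lake surface.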

  13. Effects of ambient conditions on the risk of pressure injuries in bedridden patients-multi-physics modelling of microclimate.

    PubMed

    Zeevi, Tal; Levy, Ayelet; Brauner, Neima; Gefen, Amit

    2018-06-01

    Scientific evidence regarding microclimate and its effects on the risk of pressure ulcers (PU) remains sparse. It is known that elevated skin temperatures and moisture may affect metabolic demand as well as the mechanical behaviour of the tissue. In this study, we incorporated these microclimate factors into a novel, 3-dimensional multi-physics coupled model of the human buttocks, which simultaneously determines the biothermal and biomechanical behaviours of the buttocks in supine lying on different support surfaces. We compared 3 simulated thermally controlled mattresses with 2 reference foam mattresses. A tissue damage score was numerically calculated in a relevant volume of the model, and the cooling effect of each 1°C decrease of tissue temperature was deduced. Damage scores of tissues were substantially lower for the non-foam mattresses compared with the foams. The percentage tissue volume at risk within the volume of interest was found to grow exponentially as the average tissue temperature increased. The resultant average sacral skin temperature was concluded to be a good predictor for an increased risk of PU/injuries. Each 1°C increase contributes approximately 14 times as much to the risk with respect to an increase of 1 mmHg of pressure. These findings highlight the advantages of using thermally controlled support surfaces as well as the need to further assess the potential damage that may be caused by uncontrolled microclimate conditions on inadequate support surfaces in at-risk patients. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  14. Comparison of orbital volume obtained by tomography and rapid prototyping.

    PubMed

    Roça, Guilherme Berto; Foggiatto, José Aguiomar; Ono, Maria Cecilia Closs; Ono, Sergio Eiji; da Silva Freitas, Renato

    2013-11-01

    This study aims to compare orbital volumes obtained by helical tomography and rapid prototyping. The study sample was composed of 6 helical tomography scans, in which 11 healthy orbits were identified to have their volumes measured. The volumetric analysis with helical tomography used the protocol developed by the Plastic Surgery Unit of the Federal University of Paraná. From the CT images, 11 prototypes were created, and their respective volumes were analyzed in 2 ways: using SolidWorks software and by direct analysis, in which the prototype was filled with saline solution. For statistical analysis, the volumes of the 11 orbits were considered independent. The average orbital volume was 20.51 cm³ by the method of Ono et al, 20.64 cm³ by the SolidWorks program, and 21.81 cm³ by the prototype method. The 3 methods demonstrated a strong correlation between the measurements, and the right and left orbits of each patient had similar volumes. The tomographic method for the analysis of orbital volume using the Ono protocol yielded consistent values, and combining this method with rapid prototyping enhanced the reliability and validation of the results.

  15. Models to predict both sensible and latent heat transfer in the respiratory tract of Morada Nova sheep under semiarid tropical environment

    NASA Astrophysics Data System (ADS)

    Fonseca, Vinícius Carvalho; Saraiva, Edilson Paes; Maia, Alex Sandro Campos; Nascimento, Carolina Cardoso Nagib; da Silva, Josinaldo Araújo; Pereira, Walter Esfraim; Filho, Edgard Cavalcanti Pimenta; Almeida, Maria Elivânia Vieira

    2017-05-01

    The aim of this study was to build prediction models for both sensible and latent heat transfer through the respiratory tract of Morada Nova sheep under field conditions in a semiarid tropical environment, using easily measured physiological and environmental parameters. Twelve dry Morada Nova ewes, averaging 3 ± 1.2 years of age and 32.76 ± 3.72 kg of body weight, were used in a 12 × 12 Latin square design (12 days of records and 12 schedules). Tidal volume, respiratory rate, expired air temperature, and partial vapor pressure of the expired air were obtained with a respiratory facial mask and a physiological measurement system. Ewes were evaluated from 0700 to 1900 h each day under shade. A simple nonlinear model estimating tidal volume as a function of respiratory rate was developed. An equation to estimate the expired air temperature was also built; ambient air temperature was the best predictor, together with relative humidity and ambient vapor pressure. In naturalized Morada Nova sheep, respiratory convection appears to be a heat-transfer mechanism of minor importance, even under mild air temperatures. Evaporation from the respiratory system increased with ambient air temperature; at ambient air temperatures up to 35 °C, respiratory evaporation accounted for 90% of the total heat lost by the respiratory system, on average. The models presented here allow estimation of the heat flow from the respiratory tract of Morada Nova sheep bred in tropical regions, using easily measured physiological and environmental traits such as respiratory rate, ambient air temperature, and relative humidity.

  16. Measurement of Coherent π+ Production in Low Energy Neutrino-Carbon Scattering

    NASA Astrophysics Data System (ADS)

    Abe, K.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Assylbekov, S.; Autiero, D.; Ban, S.; Barbi, M.; Barker, G. J.; Barr, G.; Bartet-Friburg, P.; Batkiewicz, M.; Bay, F.; Berardi, V.; Berkman, S.; Bhadra, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Campbell, T.; Cao, S.; Caravaca Rodríguez, J.; Cartwright, S. L.; Castillo, R.; Catanesi, M. G.; Cervera, A.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Clifton, A.; Coleman, J.; Collazuol, G.; Coplowe, D.; Cremonesi, L.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Dewhurst, D.; Di Lodovico, F.; Di Luise, S.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dytman, S.; Dziewiecki, M.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Furmanski, A. P.; Galymov, V.; Garcia, A.; Giffin, S. G.; Giganti, C.; Gizzarelli, F.; Gonin, M.; Grant, N.; Hadley, D. R.; Haegel, L.; Haigh, M. D.; Hamilton, P.; Hansen, D.; Harada, J.; Hara, T.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Helmer, R. L.; Hierholzer, M.; Hillairet, A.; Himmel, A.; Hiraki, T.; Hirota, S.; Hogan, M.; Holeczek, J.; Horikawa, S.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ieki, K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Irvine, T. J.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jacob, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jo, J. H.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Karpikov, I.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kielczewska, D.; Kikawa, T.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Konaka, A.; Kondo, K.; Kopylov, A.; Kormos, L. 
L.; Korzenev, A.; Koshio, Y.; Kropp, W.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Larkin, E.; Lasorak, P.; Laveder, M.; Lawe, M.; Lazos, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Marino, A. D.; Marteau, J.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Mijakowski, P.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Mueller, Th. A.; Murphy, S.; Myslik, J.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nirkko, M.; Nishikawa, K.; Nishimura, Y.; Novella, P.; Nowak, J.; O'Keeffe, H. M.; Ohta, R.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Pavin, M.; Payne, D.; Perkin, J. D.; Petrov, Y.; Pickard, L.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Poutissou, R.; Przewlocki, P.; Quilain, B.; Radermacher, T.; Radicioni, E.; Ratoff, P. N.; Ravonel, M.; Rayner, M. A. M.; Redij, A.; Reinherz-Aronis, E.; Riccio, C.; Rojas, P.; Rondio, E.; Roth, S.; Rubbia, A.; Rychter, A.; Sacco, R.; Sakashita, K.; Sánchez, F.; Sato, F.; Scantamburlo, E.; Scholberg, K.; Schoppmann, S.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Short, S.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Sorel, M.; Southwell, L.; Stamoulis, P.; Steinmann, J.; Stewart, T.; Stowell, P.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, K.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takahashi, S.; Takeda, A.; Takeuchi, Y.; Tanaka, H. 
K.; Tanaka, H. A.; Terhorst, D.; Terri, R.; Thakore, T.; Thompson, L. F.; Tobayama, S.; Toki, W.; Tomura, T.; Touramanis, C.; Tsukamoto, T.; Tzanov, M.; Uchida, Y.; Vacheret, A.; Vagins, M.; Vallari, Z.; Vasseur, G.; Wachala, T.; Wakamatsu, K.; Walter, C. W.; Wark, D.; Warzycha, W.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilkes, R. J.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Yamada, Y.; Yamamoto, K.; Yamamoto, M.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yoo, J.; Yoshida, K.; Yuan, T.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; Żmuda, J.; T2K Collaboration

    2016-11-01

    We report the first measurement of the flux-averaged cross section for charged current coherent π+ production on carbon for neutrino energies less than 1.5 GeV, and with a restriction on the final state phase space volume in the T2K near detector, ND280. Comparisons are made with predictions from the Rein-Sehgal coherent production model and the model by Alvarez-Ruso et al., the latter representing the first implementation of an instance of the new class of microscopic coherent models in a neutrino interaction Monte Carlo event generator. We observe a clear event excess above background, disagreeing with the null results reported by K2K and SciBooNE in a similar neutrino energy region. The measured flux-averaged cross sections are below those predicted by both the Rein-Sehgal and Alvarez-Ruso et al. models.

  17. Longitudinal predictors of aerobic performance in adolescent soccer players.

    PubMed

    Valente-dos-Santos, João; Coelho-e-Silva, Manuel J; Duarte, João; Figueiredo, António J; Liparotti, João R; Sherar, Lauren B; Elferink-Gemser, Marije T; Malina, Robert M

    2012-01-01

    The importance of aerobic performance in youth soccer is well established. The aim of the present study was to evaluate the contributions of chronological age (CA), skeletal age (SA), body size, and training to the longitudinal development of aerobic performance in male youth soccer players aged 10 to 18 years. Players (n=83) were followed annually for 5 years, resulting in an average of 4.4 observations per player. Decimal CA was calculated, and SA, stature, body weight, and aerobic performance were measured once per year. Fat-free mass (FFM) was estimated from age- and gender-specific anthropometric formulas, and annual training volume was recorded. After testing for multicollinearity, multilevel regression modeling was used to analyze the longitudinal data aligned by CA and SA (Models 1 and 2, respectively) and to develop aerobic performance scores. The following equations estimate aerobic performance for young soccer players: ŷ (Model 1; deviance from the null model = 388.50; P<0.01) = 57.75 + 9.06 × centered CA − 0.57 × centered CA² + 0.03 × annual training volume, and ŷ (Model 2; deviance from the null model = 327.98; P<0.01) = 13.03 + 4.04 × centered SA − 0.12 × centered SA² + 0.99 × FFM + 0.03 × annual training volume. The development of aerobic performance in young soccer players was significantly related to CA, biological development, and volume of training.
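The two published regression equations can be written directly as functions. This is a minimal sketch; the function and argument names are mine, and only the coefficients come from the abstract (centered CA/SA are deviations from the sample means, which the abstract does not report):

```python
def aerobic_score_ca(centered_ca, annual_training_volume):
    """Model 1: aerobic performance from centered chronological age (years)
    and annual training volume, using the coefficients in the abstract."""
    return (57.75 + 9.06 * centered_ca - 0.57 * centered_ca**2
            + 0.03 * annual_training_volume)

def aerobic_score_sa(centered_sa, fat_free_mass_kg, annual_training_volume):
    """Model 2: aerobic performance from centered skeletal age, fat-free
    mass (kg), and annual training volume."""
    return (13.03 + 4.04 * centered_sa - 0.12 * centered_sa**2
            + 0.99 * fat_free_mass_kg + 0.03 * annual_training_volume)

# A player at the mean CA with a (hypothetical) training volume of 1000:
print(round(aerobic_score_ca(0.0, 1000), 2))  # → 87.75
```

Note the quadratic terms: both models predict diminishing returns of age on aerobic performance.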

  18. Lake-level frequency analysis for Devils Lake, North Dakota

    USGS Publications Warehouse

    Wiche, Gregg J.; Vecchia, Aldo V.

    1996-01-01

    Two approaches were used to estimate future lake-level probabilities for Devils Lake. The first approach is based on an annual lake-volume model, and the second approach is based on a statistical water mass-balance model that generates seasonal lake volumes on the basis of seasonal precipitation, evaporation, and inflow. Autoregressive moving average models were used to model the annual mean lake volume and the difference between the annual maximum lake volume and the annual mean lake volume. Residuals from both models were determined to be uncorrelated with zero mean and constant variance. However, a nonlinear relation between the residuals of the two models was included in the final annual lake-volume model. Because of high autocorrelation in the annual lake levels of Devils Lake, the annual lake-volume model was verified using annual lake-level changes. The annual lake-volume model closely reproduced the statistics of the recorded lake-level changes for 1901-93 except for the skewness coefficient. However, the model output is less skewed than the data indicate because of some unrealistically large lake-level declines. The statistical water mass-balance model requires as inputs seasonal precipitation, evaporation, and inflow data for Devils Lake. Analysis of annual precipitation, evaporation, and inflow data for 1950-93 revealed no significant trends or long-range dependence, so the input time series were assumed to be stationary and short-range dependent. Normality transformations were used to approximately maintain the marginal probability distributions, and a multivariate, periodic autoregressive model was used to reproduce the correlation structure. Each of the coefficients in the model is significantly different from zero at the 5-percent significance level. Coefficients relating spring inflow from one year to spring and fall inflows from the previous year had the largest effect on the lake-level frequency analysis. Inclusion of parameter uncertainty in the model for generating precipitation, evaporation, and inflow indicates that the upper lake-level exceedance levels from the water mass-balance model are particularly sensitive to parameter uncertainty. The sensitivity in the upper exceedance levels was caused almost entirely by uncertainty in the fitted probability distributions of the quarterly inflows. A method was developed for using long-term streamflow data for the Red River of the North at Grand Forks to reduce the variance in the estimated mean. Comparison of the annual lake-volume model and the water mass-balance model indicates the upper exceedance levels of the water mass-balance model increase much more rapidly than those of the annual lake-volume model. As an example, for simulation year 5, the 99-percent exceedance for the lake level is 1,417.6 feet above sea level for the annual lake-volume model and 1,423.2 feet above sea level for the water mass-balance model. The rapid increase is caused largely by the record precipitation and inflow in the summer and fall of 1993. Because the water mass-balance model produces lake-level traces that closely match the hydrology of Devils Lake, the water mass-balance model is superior to the annual lake-volume model for computing exceedance levels for the 50-year planning horizon.
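Exceedance levels of the kind quoted above are obtained by simulating many synthetic traces from the fitted time-series model and reading off percentiles at each horizon. The sketch below illustrates the idea with a generic ARMA(1,1) model; the coefficients, units, and noise level are illustrative placeholders, not the report's fitted values:

```python
import numpy as np

def simulate_arma11(n_years, phi=0.95, theta=0.3, mu=600.0, sigma=40.0, seed=0):
    """One synthetic trace of annual mean lake volume from an ARMA(1,1) model:
    (v_t - mu) = phi*(v_{t-1} - mu) + e_t + theta*e_{t-1}.
    All parameters here are illustrative, not the report's estimates."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, n_years)
    v = np.empty(n_years)
    dev_prev, e_prev = 0.0, 0.0
    for t in range(n_years):
        dev = phi * dev_prev + e[t] + theta * e_prev
        v[t] = mu + dev
        dev_prev, e_prev = dev, e[t]
    return v

# Exceedance levels come from an ensemble of traces, e.g. the
# 99th percentile of simulated volume at simulation year 5:
traces = np.array([simulate_arma11(50, seed=s) for s in range(500)])
print(np.percentile(traces[:, 4], 99))
```

The strong autocorrelation (phi near 1) is what makes the upper exceedance levels fan out quickly over the planning horizon, mirroring the behavior described in the abstract.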

  20. Experimental investigation, model development and sensitivity analysis of rheological behavior of ZnO/10W40 nano-lubricants for automotive applications

    NASA Astrophysics Data System (ADS)

    Hemmat Esfe, Mohammad; Saedodin, Seyfolah; Rejvani, Mousa; Shahram, Jalal

    2017-06-01

    In the present study, the rheological behavior of a ZnO/10W40 nano-lubricant is investigated experimentally. ZnO nanoparticles of 10-30 nm were dispersed in 10W40 engine oil at solid volume fractions of 0.25-2%, and the viscosity of the resulting nano-lubricant was measured over a temperature range of 5-55 °C and at various shear rates. The results revealed that both the base oil and the nano-lubricants are non-Newtonian fluids exhibiting shear-thinning behavior. The sensitivity of the viscosity to increases in solid volume fraction was calculated from a newly proposed correlation in terms of solid volume fraction and temperature. To obtain an accurate model for predicting the experimental data, an artificial neural network (ANN) with one hidden layer of 5 neurons was designed. This model was highly accurate in predicting the experimental dynamic viscosity data, with an R-squared of 0.9999 and an average absolute relative deviation (AARD%) of 0.0502.
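The AARD% figure quoted above is a standard goodness-of-fit metric. Assuming the usual definition (the paper may normalize slightly differently), it can be computed as:

```python
def aard_percent(y_exp, y_pred):
    """Average absolute relative deviation in percent:
    AARD% = (100/N) * sum(|y_pred - y_exp| / |y_exp|)."""
    if len(y_exp) != len(y_pred) or not y_exp:
        raise ValueError("inputs must be equal-length, non-empty sequences")
    return 100.0 * sum(abs(p - e) / abs(e)
                       for e, p in zip(y_exp, y_pred)) / len(y_exp)

# Hypothetical viscosity data (mPa·s): 1% error on each point gives AARD% = 1.0
print(aard_percent([100.0, 200.0], [101.0, 198.0]))  # → 1.0
```

An AARD% of 0.0502, as reported, means the ANN's viscosity predictions deviate from the measurements by about 0.05% on average.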

  1. Derivation and Validation of Supraglacial Lake Volumes on the Greenland Ice Sheet from High-Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane

    2016-01-01

    Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas, and their application to large-scale studies is therefore difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km²) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally derived depth-retrieval models, which are based either on absolute reflectance (single-channel model) or on a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error of <1%. The approach outlined in this study, image-based calibration of depth-retrieval models, significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent of in situ measurements.
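Dual-channel depth retrieval of this kind is commonly formulated as a regression on the log of the band ratio (a Stumpf-style model). The sketch below shows that calibration step on synthetic data; the exact functional form used in the paper, and all data here, are assumptions for illustration:

```python
import numpy as np

def fit_dual_channel(r_band1, r_band2, depths):
    """Fit depth = a * ln(R1/R2) + b by least squares (band-ratio model).
    Returns (a, b)."""
    x = np.log(np.asarray(r_band1) / np.asarray(r_band2))
    a, b = np.polyfit(x, np.asarray(depths), 1)
    return a, b

def predict_depth(a, b, r_band1, r_band2):
    return a * np.log(np.asarray(r_band1) / np.asarray(r_band2)) + b

# Synthetic calibration data: reflectance in band 1 attenuates with depth
# faster than in band 2 (Beer-Lambert-style decay, coefficients invented).
rng = np.random.default_rng(1)
true_depth = rng.uniform(0.5, 8.0, 50)
r1 = 0.3 * np.exp(-0.4 * true_depth)
r2 = 0.3 * np.exp(-0.2 * true_depth)
a, b = fit_dual_channel(r1, r2, true_depth)
```

The ratio form is attractive because bottom albedo (the 0.3 factor above) cancels, which is one reason dual-channel models transfer between lakes better than single-channel ones.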

  2. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been many traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not identical. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Each of the ADT-based, AHT-based, and real-time models was then used to estimate safety conditions at the daily and hourly levels, and the real-time model was also applied at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and that the real-time safety model was also able to provide hourly crash frequencies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. PREOPERATIVE COMPUTED TOMOGRAPHY VOLUMETRY AND GRAFT WEIGHT ESTIMATION IN ADULT LIVING DONOR LIVER TRANSPLANTATION

    PubMed Central

    PINHEIRO, Rafael S.; CRUZ-JR, Ruy J.; ANDRAUS, Wellington; DUCATTI, Liliana; MARTINO, Rodrigo B.; NACIF, Lucas S.; ROCHA-SANTOS, Vinicius; ARANTES, Rubens M; LAI, Quirino; IBUKI, Felicia S.; ROCHA, Manoel S.; D´ALBUQUERQUE, Luiz A. C.

    2017-01-01

    ABSTRACT Background: Computed tomography volumetry (CTV) is a useful tool for predicting graft weight (GW) for living donor liver transplantation (LDLT). Few studies have examined the correlation between CTV and GW in normal liver parenchyma. Aim: To analyze the correlation between CTV and GW in an adult LDLT population and to provide a systematic review of the existing mathematical models for calculating partial liver graft weight. Methods: Between January 2009 and January 2013, 28 consecutive donors undergoing right hepatectomy for LDLT were retrospectively reviewed. All grafts were perfused with HTK solution. Graft volume was estimated by CTV, and these values were compared to the actual graft weight, measured after liver harvesting and perfusion. Results: The actual GW had a median of 782.5 g, a mean of 791.43±136 g, and a range of 520-1185 g. The estimated graft volume had a median of 927.5 ml, a mean of 944.86±200.74 ml, and a range of 600-1477 ml. Linear regression of estimated graft volume against actual GW was significantly linear (GW = 0.82 × estimated graft volume; r² = 0.98; slope = 0.47; standard deviation = 0.024; P<0.0001). The Spearman correlation was 0.65, with a 95% CI of 0.45-0.99 (P<0.0001). Conclusion: The one-to-one rule did not apply in patients with normal liver parenchyma. A better estimate of graft weight can be obtained by multiplying the estimated graft volume by 0.82. PMID:28489167
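The study's practical takeaway, replacing the one-to-one rule with a 0.82 correction factor, reduces to a single line of arithmetic. A minimal sketch (function name mine; only the 0.82 factor comes from the abstract):

```python
def estimate_graft_weight(ct_volume_ml, factor=0.82):
    """Estimate graft weight (g) from CT-derived graft volume (ml) using the
    regression factor reported in the study (GW = 0.82 * estimated volume)."""
    if ct_volume_ml <= 0:
        raise ValueError("CT volume must be positive")
    return factor * ct_volume_ml

# Applying the factor to the study's median estimated volume of 927.5 ml
print(estimate_graft_weight(927.5))
```

For the median case, the corrected estimate (about 761 g) is much closer to the median actual weight of 782.5 g than the one-to-one estimate of 927.5 g would be.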

  4. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.

  5. Cubic-foot tree volumes and product recoveries for eastern redcedar in the Ozarks

    Treesearch

    Leland F. Hanks

    1979-01-01

    Tree volume tables and equations for eastern redcedar are presented for gross volume, cant volume, and volume of sawmill residue. These volumes, when multiplied by the average value per cubic foot of cants and residue, provide a way to estimate tree value.

  6. A time-dependent model to determine the thermal conductivity of a nanofluid

    NASA Astrophysics Data System (ADS)

    Myers, T. G.; MacDevette, M. M.; Ribera, H.

    2013-07-01

    In this paper, we analyse the time-dependent heat equations over a finite domain to determine expressions for the thermal diffusivity and conductivity of a nanofluid (a fluid containing nanoparticles with average size below 100 nm). Due to the complexity of the standard mathematical analysis of this problem, we employ a well-known approximate solution technique known as the heat balance integral method. This allows us to derive simple analytical expressions for the thermal properties, which appear to depend primarily on the volume fraction and liquid properties. The model is shown to compare well with experimental data taken from the literature, even up to relatively high concentrations, and predicts significantly higher values than the Maxwell model for volume fractions above approximately 1%. The results suggest that the difficulty in reproducing the high conductivity values observed experimentally may stem from the use of a static heat-flow model applied over an infinite domain rather than a dynamic model over a finite domain.
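The Maxwell model used as the baseline above has a classical closed form for a dilute suspension of spheres. A minimal sketch of that baseline (the example particle and fluid conductivities are typical textbook values, not taken from this paper):

```python
def maxwell_conductivity(k_fluid, k_particle, phi):
    """Effective thermal conductivity (W/m K) from the classical Maxwell model:
      k_eff = k_f * (k_p + 2k_f + 2*phi*(k_p - k_f))
                  / (k_p + 2k_f -   phi*(k_p - k_f))
    where phi is the particle volume fraction (0-1)."""
    kf, kp = float(k_fluid), float(k_particle)
    num = kp + 2.0 * kf + 2.0 * phi * (kp - kf)
    den = kp + 2.0 * kf - phi * (kp - kf)
    return kf * num / den

# Example: 1% oxide particles (k ~ 40 W/m K) in water (k ~ 0.6 W/m K)
print(round(maxwell_conductivity(0.6, 40.0, 0.01), 4))
```

Because the Maxwell prediction grows only weakly with phi at small volume fractions (a few percent enhancement at 1%), any dynamic model predicting noticeably higher values diverges from it quickly above that point, which is the comparison the abstract describes.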

  7. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  8. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). The well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle, with the rest of the volume considered as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain-size distribution for unconsolidated samples shows the impact of the presence of another large particle: only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases, one in which sand is the smaller particle and one in which it is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as a weighted average of the two cases, with the volume fractions of gravel and clay as weights. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
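The final permeability step can be sketched with the standard Kozeny-Carman form. The equation is the conventional one (with the usual factor of 180 for packed spheres); the simple volume-weighted effective grain size below is my own placeholder, since the abstract does not give the paper's exact averaging scheme:

```python
def kozeny_carman_permeability(porosity, grain_diameter_m):
    """Permeability (m^2) from the standard Kozeny-Carman relation:
    k = d^2 * phi^3 / (180 * (1 - phi)^2), with effective porosity phi
    and effective grain diameter d."""
    if not 0.0 < porosity < 1.0:
        raise ValueError("porosity must be in (0, 1)")
    return grain_diameter_m**2 * porosity**3 / (180.0 * (1.0 - porosity)**2)

def effective_grain_size(fractions, diameters_m):
    """Volume-fraction-weighted mean grain size (a simple assumption;
    the paper's averaging scheme may differ)."""
    total = sum(fractions)
    return sum(f * d for f, d in zip(fractions, diameters_m)) / total

# Hypothetical sediment: 5% gravel (10 mm), 85% sand (0.5 mm), 10% clay (2 um)
d_eff = effective_grain_size([0.05, 0.85, 0.10], [10e-3, 0.5e-3, 2e-6])
k = kozeny_carman_permeability(0.35, d_eff)
```

The d² dependence is why a few percent of gravel can raise the predicted permeability substantially: the large grains dominate any grain-size average based on volume fractions.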

  9. Ex Vivo Liver Experiment of Hydrochloric Acid-Infused and Saline-Infused Monopolar Radiofrequency Ablation: Better Outcomes in Temperature, Energy, and Coagulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Xiong-ying; Gu, Yang-kui; Huang, Jin-hua, E-mail: huangjh@sysucc.org.cn

    Objective: To compare temperature, energy, and coagulation between hydrochloric acid-infused radiofrequency ablation (HAIRFA) and normal saline-infused radiofrequency ablation (NSIRFA) in an ex vivo porcine liver model. Materials and Methods: Thirty fresh porcine livers were used to create 60 lesions, 30 with HAIRFA and 30 with NSIRFA. Both modalities used a monopolar perfusion electrode connected to an RF generator set at 103 °C and 30 W. In each group, ablation time was set at 10, 20, or 30 min (10 lesions from each group at each time). We compared tissue temperatures (at 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 cm from the electrode tip), average power, deposited energy, deposited energy per coagulation volume (DEV), coagulation diameters, coagulation volume, and spherical ratio between the two groups. Results: Temperature-time curves showed that HAIRFA provided progressively greater heating than NSIRFA. At 30 min, mean average power, deposited energy, coagulation volumes (113.67 vs. 12.28 cm3) and diameters, and the increase in tissue temperature were all much greater with HAIRFA (P < 0.001 for all), whereas DEV was lower (456 vs. 1396 J/cm3, P < 0.001). The spherical ratio was closer to 1 with HAIRFA (1.23 vs. 1.46). Coagulation diameters, volume, and average power of HAIRFA increased significantly with longer ablation times. With NSIRFA, these characteristics were stable after 20 min, except that the power decreased with longer ablation times. Conclusions: HAIRFA creates much larger and more spherical lesions by increasing overall energy deposition, modulating thermal conductivity, and transferring heat during ablation.

  10. Effects of Long-Term Low-Level Radiofrequency Radiation Exposure on Rats. Volume 2. Average SAR and SAR Distribution in Man Exposed to 450-MHz RFR.

    DTIC Science & Technology

    1983-09-01

    adult man (full-scale height = 171 cm) and child (full-scale height = 86 cm), with arms down. We used the full-scale figure to reflect a worst-case... child for all orientations was much higher than that for the adult (e.g., 0.187 W/kg versus 0.063 W/kg), which is expected since the frequency is closer...to the resonance frequency for the child . Another series of scale-model measurements was conducted for determination of the average SAR values for

  11. Volume-Of-Fluid Simulation for Predicting Two-Phase Cooling in a Microchannel

    NASA Astrophysics Data System (ADS)

    Gorle, Catherine; Parida, Pritish; Houshmand, Farzad; Asheghi, Mehdi; Goodson, Kenneth

    2014-11-01

    Two-phase flow in microfluidic geometries has applications of increasing interest for next-generation electronic and optoelectronic systems, telecommunications devices, and vehicle electronics. While there has been progress on comprehensive simulation of two-phase flows in compact geometries, validation of the results in different flow regimes should be considered to determine the predictive capabilities. In the present study we use the volume-of-fluid method to model the flow through a single microchannel with a 100 × 100 μm cross section and a length of 10 mm. The channel inlet mass flux and the heat flux at the lower wall result in a subcooled boiling regime in the first 2.5 mm of the channel and a saturated flow regime further downstream. A conservation equation for the vapor volume fraction and a single set of momentum and energy equations with volume-averaged fluid properties are solved. A reduced-physics phase change model represents the evaporation of the liquid and the corresponding heat loss, and the surface tension is accounted for by a source term in the momentum equation. The phase change model requires the definition of a time relaxation parameter, which can significantly affect the solution since it determines the rate of evaporation. The results are compared to experimental data available in the literature, focusing on the capability of the reduced-physics phase change model to predict the correct flow pattern, temperature profile, and pressure drop.
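The single-field, volume-averaged property treatment described here can be sketched as a volume-fraction-weighted arithmetic average over the two phases; the property values below are rough figures for water and steam near saturation and are assumptions for illustration only.

```python
def mixture_property(alpha_v, prop_vapor, prop_liquid):
    """Volume-averaged fluid property from the vapor volume
    fraction alpha_v, as used in one-fluid VOF formulations:
    prop = alpha_v * prop_vapor + (1 - alpha_v) * prop_liquid."""
    return alpha_v * prop_vapor + (1.0 - alpha_v) * prop_liquid

# Approximate densities (kg/m^3) for steam and water at 1 atm
rho = mixture_property(0.5, 0.6, 958.0)  # cell half-filled with vapor
```

The same weighting would be applied per computational cell to density, viscosity, and conductivity before solving the shared momentum and energy equations.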

  12. A case study examining the efficacy of drainage setbacks for limiting effects to wetlands in the Prairie Pothole Region, USA

    USGS Publications Warehouse

    Tangen, Brian; Finocchiaro, Raymond

    2017-01-01

    The enhancement of agricultural lands through the use of artificial drainage systems is a common practice throughout the United States, and recently the use of this practice has expanded in the Prairie Pothole Region. Many wetlands are afforded protection from the direct effects of drainage through regulation or legal agreements, and drainage setback distances typically are used to provide a buffer between wetlands and drainage systems. A field study was initiated to assess the potential for subsurface drainage to affect wetland surface-water characteristics through a reduction in precipitation runoff, and to examine the efficacy of current U.S. Department of Agriculture drainage setback distances for limiting these effects. Surface-water levels, along with primary components of the catchment water balance, were monitored over 3 y at four seasonal wetland catchments situated in a high-relief terrain (7–11% slopes). During the second year of the study, subsurface drainage systems were installed in two of the catchments using drainage setbacks, and the drainage discharge volumes were monitored. A catchment water-balance model was used to assess the potential effect of subsurface drainage on wetland hydrology and to assess the efficacy of drainage setbacks for mitigating these effects. Results suggest that overland precipitation runoff can be an important component of the seasonal water balance of Prairie Pothole Region wetlands, accounting on average for 34% (19–49%) or 45% (39–49%) of the annual (includes snowmelt runoff) or seasonal (does not include snowmelt) input volumes, respectively. Seasonal (2014–2015) discharge volumes from the localized drainage systems averaged 81 m3 (31–199 m3), and were small when compared with average combined inputs of 3,745 m3 (1,214–6,993 m3) from snowmelt runoff, direct precipitation, and precipitation runoff. 
Model simulations of reduced precipitation runoff volumes as a result of subsurface drainage systems showed that ponded wetland surface areas were reduced by an average of 590 m2 (141–1,787 m2), or 24% (3–46%), when no setbacks were used (drainage systems located directly adjacent to wetland). Likewise, wetland surface areas were reduced by an average of 141 m2 (23–464 m2), or 7% (1–28%), when drainage setbacks (buffer) were used. Taken together, the field data and model simulations suggest that the drainage setbacks should reduce, but not eliminate, impacts to the water balance of the four wetlands monitored in this study, which were located in a high-relief terrain. However, further study is required to assess the validity of these conclusions outside of the limited parameters (e.g., terrain, weather, soils) of this study and to examine potential ecological effects of altered wetland hydrology.

  13. In vivo comparison of simultaneous versus sequential injection technique for thermochemical ablation in a porcine model.

    PubMed

    Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C

    2012-01-01

    To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and to compare them with sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and a coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of the zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential), and 4.4 mL (acid only). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential), and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) showed no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissue relative to the amounts of reagents injected, considerably greater than acid alone with either technique. The largest volumes were obtained with sequential injection, yet this came at a price: one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either the sequential or the simultaneous method, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.

  14. Modeling of macrosegregation caused by volumetric deformation in a coherent mushy zone

    NASA Astrophysics Data System (ADS)

    Nicolli, Lilia C.; Mo, Asbjørn; M'hamdi, Mohammed

    2005-02-01

    A two-phase volume-averaged continuum model is presented that quantifies macrosegregation formation during solidification of metallic alloys caused by deformation of the dendritic network and associated melt flow in the coherent part of the mushy zone. Also, the macrosegregation formation associated with the solidification shrinkage (inverse segregation) is taken into account. Based on experimental evidence established elsewhere, volumetric viscoplastic deformation (densification/dilatation) of the coherent dendritic network is included in the model. While the thermomechanical model previously outlined (M. M’Hamdi, A. Mo, and C.L. Martin: Metall. Mater. Trans. A, 2002, vol. 33A, pp. 2081-93) has been used to calculate the temperature and velocity fields associated with the thermally induced deformations and shrinkage driven melt flow, the solute conservation equation including both the liquid and a solid volume-averaged velocity is solved in the present study. In modeling examples, the macrosegregation formation caused by mechanically imposed as well as by thermally induced deformations has been calculated. The modeling results for an Al-4 wt pct Cu alloy indicate that even quite small volumetric strains (≈2 pct), which can be associated with thermally induced deformations, can lead to a macroscopic composition variation in the final casting comparable to that resulting from the solidification shrinkage induced melt flow. These results can be explained by the relatively large volumetric viscoplastic deformation in the coherent mush resulting from the applied constitutive model, as well as the relatively large difference in composition for the studied Al-Cu alloy in the solid and liquid phases at high solid fractions at which the deformation takes place.

  15. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g., average velocity, concentration, and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.
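The first element of the method, averaging SPH variables over the entire computational domain to obtain the effective (average) variables, can be sketched as a mass-weighted particle average; the per-particle data below are invented for illustration.

```python
def domain_average(values, masses):
    """Mass-weighted average of an SPH field over all particles,
    e.g. the domain-average velocity or solute concentration."""
    total_mass = sum(masses)
    return sum(v * m for v, m in zip(values, masses)) / total_mass

conc = [0.1, 0.3, 0.2, 0.4]   # hypothetical per-particle concentrations
mass = [1.0, 1.0, 2.0, 1.0]   # hypothetical particle masses
c_avg = domain_average(conc, mass)
```

In the actual method this average evolves by an effective ODE whose non-local terms are re-estimated from short bursts of the full SPH model (the computational closure), rather than by integrating every particle over the whole observation time.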

  16. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.

  17. Evaluation of Traffic Density Parameters as an Indicator of Vehicle Emission-Related Near-Road Air Pollution: A Case Study with NEXUS Measurement Data on Black Carbon.

    PubMed

    Liu, Shi V; Chen, Fu-Lin; Xue, Jianping

    2017-12-15

    An important factor in evaluating health risk of near-road air pollution is to accurately estimate the traffic-related vehicle emission of air pollutants. Inclusion of traffic parameters such as road length/area, distance to roads, and traffic volume/intensity into models such as land use regression (LUR) models has improved exposure estimation. To better understand the relationship between vehicle emissions and near-road air pollution, we evaluated three traffic density-based indices: Major-Road Density (MRD), All-Traffic Density (ATD) and Heavy-Traffic Density (HTD) which represent the proportions of major roads, major road with annual average daily traffic (AADT), and major road with commercial annual average daily traffic (CAADT) in a buffered area, respectively. We evaluated the potential of these indices as vehicle emission-specific near-road air pollutant indicators by analyzing their correlation with black carbon (BC), a marker for mobile source air pollutants, using measurement data obtained from the Near-road Exposures and Effects of Urban Air Pollutants Study (NEXUS). The average BC concentrations during a day showed variations consistent with changes in traffic volume which were classified into high, medium, and low for the morning rush hours, the evening rush hours, and the rest of the day, respectively. The average correlation coefficients between BC concentrations and MRD, ATD, and HTD, were 0.26, 0.18, and 0.48, respectively, as compared with -0.31 and 0.25 for two commonly used traffic indicators: nearest distance to a major road and total length of the major road. HTD, which includes only heavy-duty diesel vehicles in its traffic count, gives statistically significant correlation coefficients for all near-road distances (50, 100, 150, 200, 250, and 300 m) that were analyzed. Generalized linear model (GLM) analyses show that season, traffic volume, HTD, and distance from major roads are highly related to BC measurements. 
Our analyses indicate that traffic density parameters may be more specific indicators of near-road BC concentrations for health risk studies. HTD is the best index for reflecting near-road BC concentrations which are influenced mainly by the emissions of heavy-duty diesel engines.
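The correlation step in this analysis can be sketched with a plain Pearson coefficient between a traffic density index and measured BC concentrations; the paired values below are invented for illustration and are not NEXUS data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

htd = [0.2, 0.5, 0.7, 0.9]   # hypothetical heavy-traffic density index
bc = [1.1, 1.6, 2.0, 2.4]    # hypothetical BC concentrations (ug/m^3)
r = pearson_r(htd, bc)
```

Repeating this per index (MRD, ATD, HTD) and per near-road distance band would reproduce the kind of coefficient table the study reports.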

  18. Evaluation of Traffic Density Parameters as an Indicator of Vehicle Emission-Related Near-Road Air Pollution: A Case Study with NEXUS Measurement Data on Black Carbon

    PubMed Central

    Chen, Fu-Lin; Xue, Jianping

    2017-01-01

    An important factor in evaluating health risk of near-road air pollution is to accurately estimate the traffic-related vehicle emission of air pollutants. Inclusion of traffic parameters such as road length/area, distance to roads, and traffic volume/intensity into models such as land use regression (LUR) models has improved exposure estimation. To better understand the relationship between vehicle emissions and near-road air pollution, we evaluated three traffic density-based indices: Major-Road Density (MRD), All-Traffic Density (ATD) and Heavy-Traffic Density (HTD) which represent the proportions of major roads, major road with annual average daily traffic (AADT), and major road with commercial annual average daily traffic (CAADT) in a buffered area, respectively. We evaluated the potential of these indices as vehicle emission-specific near-road air pollutant indicators by analyzing their correlation with black carbon (BC), a marker for mobile source air pollutants, using measurement data obtained from the Near-road Exposures and Effects of Urban Air Pollutants Study (NEXUS). The average BC concentrations during a day showed variations consistent with changes in traffic volume which were classified into high, medium, and low for the morning rush hours, the evening rush hours, and the rest of the day, respectively. The average correlation coefficients between BC concentrations and MRD, ATD, and HTD, were 0.26, 0.18, and 0.48, respectively, as compared with −0.31 and 0.25 for two commonly used traffic indicators: nearest distance to a major road and total length of the major road. HTD, which includes only heavy-duty diesel vehicles in its traffic count, gives statistically significant correlation coefficients for all near-road distances (50, 100, 150, 200, 250, and 300 m) that were analyzed. Generalized linear model (GLM) analyses show that season, traffic volume, HTD, and distance from major roads are highly related to BC measurements. 
Our analyses indicate that traffic density parameters may be more specific indicators of near-road BC concentrations for health risk studies. HTD is the best index for reflecting near-road BC concentrations which are influenced mainly by the emissions of heavy-duty diesel engines. PMID:29244754

  19. SToRM: A Model for 2D environmental hydraulics

    USGS Publications Warehouse

    Simões, Francisco J. M.

    2017-01-01

    A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model, SToRM, is based on an unstructured cell-centered finite volume formulation and on nonlinear strong stability preserving Runge-Kutta time stepping schemes. The numerical discretization is founded on the classical and well established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. Computational efficiency is achieved through a parallel implementation based on the OpenMP standard and the Fortran programming language. SToRM’s implementation within a graphical user interface is discussed. Field application of SToRM is illustrated by utilizing it to estimate peak flow discharges in a flooding event of the St. Vrain Creek in Colorado, U.S.A., in 2013, which reached 850 m3/s (~30,000 ft3/s) at the location of this study.

  20. Computational analysis of species transport and electrochemical characteristics of a MOLB-type SOFC

    NASA Astrophysics Data System (ADS)

    Hwang, J. J.; Chen, C. K.; Lai, D. Y.

    A multi-physics model coupling electrochemical kinetics with fluid dynamics has been developed to simulate the transport phenomena in mono-block-layer built (MOLB) solid oxide fuel cells (SOFC). A typical MOLB module is composed of trapezoidal flow channels, corrugated positive electrode-electrolyte-negative electrode (PEN) plates, and planar interconnects. The control-volume-based finite difference method is employed for the calculations, which are based on the conservation of mass, momentum, energy, species, and electric charge. In the porous electrodes, the flow momentum is governed by a Darcy model with constant porosity and permeability. The diffusion of reactants follows the Bruggman model. The chemistry within the plates is described via surface reactions with a fixed surface-to-volume ratio, tortuosity, and average pore size. Species transport, as well as the local variations of electrochemical characteristics such as overpotential and current density distributions in the electrodes of an MOLB SOFC, is discussed in detail.

  1. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castelluccio, Gustavo M.; McDowell, David L.

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum-life design of fatigue-resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to achieve a description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant-amplitude remote applied straining are characterized in terms of the extreme value distributions of volume-averaged FIPs. Grain-averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  2. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE PAGES

    Castelluccio, Gustavo M.; McDowell, David L.

    2015-05-22

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum-life design of fatigue-resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to achieve a description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant-amplitude remote applied straining are characterized in terms of the extreme value distributions of volume-averaged FIPs. Grain-averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  3. Optical Coherence Tomography Based Estimates of Crystalline Lens Volume, Equatorial Diameter, and Plane Position.

    PubMed

    Martinez-Enriquez, Eduardo; Sun, Mengchan; Velasco-Ocana, Miriam; Birkenfeld, Judith; Pérez-Merino, Pablo; Marcos, Susana

    2016-07-01

    Measurement of crystalline lens geometry in vivo is critical to optimize performance of state-of-the-art cataract surgery. We used custom-developed quantitative anterior segment optical coherence tomography (OCT) and developed dedicated algorithms to estimate lens volume (VOL), equatorial diameter (DIA), and equatorial plane position (EPP). The method was validated ex vivo in 27 human donor (19-71 years of age) lenses, which were imaged in three-dimensions by OCT. In vivo conditions were simulated assuming that only the information within a given pupil size (PS) was available. A parametric model was used to estimate the whole lens shape from PS-limited data. The accuracy of the estimated lens VOL, DIA, and EPP was evaluated by comparing estimates from the whole lens data and PS-limited data ex vivo. The method was demonstrated in vivo using 2 young eyes during accommodation and 2 cataract eyes. Crystalline lens VOL was estimated within 96% accuracy (average estimation error across lenses ± standard deviation: 9.30 ± 7.49 mm3). Average estimation errors in EPP were below 40 ± 32 μm, and below 0.26 ± 0.22 mm in DIA. Changes in lens VOL with accommodation were not statistically significant (2-way ANOVA, P = 0.35). In young eyes, DIA decreased and EPP increased statistically significantly with accommodation (P < 0.001) by 0.14 mm and 0.13 mm, respectively, on average across subjects. In cataract eyes, VOL = 205.5 mm3, DIA = 9.57 mm, and EPP = 2.15 mm on average. Quantitative OCT with dedicated image processing algorithms allows estimation of human crystalline lens volume, diameter, and equatorial lens position, as validated from ex vivo measurements, where entire lens images are available.

  4. Simulated Long-term Effects of the MOFEP Cutting Treatments

    Treesearch

    David R. Larsen

    1997-01-01

    Changes in average basal area and volume per acre were simulated for a 35-year period using the treatments designated for sites 4, 5, and 6 of the Missouri Ozark Forest Ecosystem Project. A traditional growth and yield model (Central States TWIGS variant of the Forest Vegetation Simulator) was used with the Landscape Management System software to simulate and display...

  5. Finite Element Methods and Multiphase Continuum Theory for Modeling 3D Air-Water-Sediment Interactions

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.

    2016-12-01

    The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. These models are still computationally expensive and suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured meshes. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly the fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted includes flow and wave transmission through the solid media, as well as erosion and deposition of granular media and moving-bed dynamics. 
For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air/water flow via operator splitting (fractional step) schemes. Particular attention will be given to verification and validation of the numerical model and important qualitative features of the numerical methods including phase conservation, wave energy dissipation, and computational efficiency in regimes of interest.

  6. Accelerated Brain DCE-MRI Using Iterative Reconstruction With Total Generalized Variation Penalty for Quantitative Pharmacokinetic Analysis: A Feasibility Study.

    PubMed

    Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng

    2017-08-01

    To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with a total generalized variation penalty in the quantitative pharmacokinetic analysis of clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4: (1) a golden-ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant Ktrans in the extended Tofts model, and the blood flow FB and blood volume VB from the 2-compartment exchange model, were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify the accuracy of each parameter calculated using the undersampled data, the error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to those generated from the original fully sampled data. Within the region of interest, most of the derived error in volume mean values were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. 
All investigated pharmacokinetic parameters showed no significant differences between the results from the original data and the reduced sampling data. With sparsely sampled k-space data simulating acquisition accelerated by a factor of 4, the investigated total generalized variation-based iterative image reconstruction method can accurately estimate the dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters for reliable clinical application.
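The accuracy metrics named above can be sketched as follows; the paper's exact definitions are not reproduced here, so these are common-sense forms (an assumption), with made-up voxel maps standing in for the fully sampled and undersampled parameter maps.

```python
def error_in_volume_mean(est, ref):
    """Relative difference of the map means (one plausible definition)."""
    m_est = sum(est) / len(est)
    m_ref = sum(ref) / len(ref)
    return abs(m_est - m_ref) / abs(m_ref)

def total_relative_error(est, ref):
    """Sum of absolute voxel errors over the sum of reference values."""
    return sum(abs(a - b) for a, b in zip(est, ref)) / sum(abs(b) for b in ref)

ref = [1.0, 2.0, 3.0, 4.0]   # hypothetical fully sampled Ktrans map
est = [1.1, 1.9, 3.2, 3.9]   # hypothetical undersampled reconstruction
evm = error_in_volume_mean(est, ref)
tre = total_relative_error(est, ref)
```

The third metric, cross-correlation, would be a standard Pearson coefficient between the two maps, computed the same voxel-by-voxel way.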

  7. Revised Calculated Volumes Of Individual Shield Volcanoes At The Young End Of The Hawaiian Ridge

    NASA Astrophysics Data System (ADS)

    Robinson, J. E.; Eakins, B. W.

    2003-12-01

    Recent, high-resolution multibeam bathymetry and a digital elevation model of the Hawaiian Islands allow us to recalculate Bargar and Jackson's [1974] volumes of coalesced volcanic edifices (Hawaii, Maui-Nui, Oahu, Kauai, and Niihau) and individual shield volcanoes at the young end of the Hawaiian Ridge, taking into account subsidence of the Pacific plate under the load of the volcanoes as modeled by Watts and ten Brink [1989]. Our volume for the Island of Hawaii (2.48 x 10^5 km^3) is twice the previous estimate (1.13 x 10^5 km^3), due primarily to crustal subsidence, which had not been accounted for in the earlier work. The volcanoes that make up the Hawaii edifice (Mahukona, Kohala, Mauna Kea, Hualalai, Mauna Loa, Kilauea, and Loihi) are generally considered to have formed within the past million years, and our revised volume for Hawaii indicates that either magma-supply rates are greater than previously estimated (0.25 km^3/yr as opposed to 0.1 km^3/yr) or that Hawaii's volcanoes have erupted over a longer period of time (>1 million years). Our results also indicate that magma supply rates have increased dramatically to build the Hawaiian edifices: the average rate of the past 5 million years (0.096 km^3/yr) is substantially greater than the overall average of the Hawaiian Ridge (0.018 km^3/yr) or Emperor Seamounts (0.012 km^3/yr) as calculated by Bargar and Jackson, and rates within the past million years are greater still (0.25 km^3/yr). References: Bargar, K. E., and Jackson, E. D., 1974, Calculated volumes of individual shield volcanoes along the Hawaiian-Emperor Chain, Jour. Research U.S. Geol. Survey, Vol. 2, No. 5, p. 545-550. Watts, A. B., and ten Brink, U. S., 1989, Crustal structure, flexure, and subsidence history of the Hawaiian Islands, Jour. Geophys. Res., Vol. 94, No. B8, p. 10,473-10,500.
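The magma-supply argument above is simple arithmetic: dividing the revised island volume by the conventional ~1-Myr formation interval reproduces the quoted 0.25 km^3/yr rate. A quick check:

```python
# Back-of-the-envelope check of the magma-supply-rate argument above.
# The volume is the abstract's revised figure for the Island of Hawaii; the
# 1-Myr formation time is the conventional assumption the authors test against.
hawaii_volume_km3 = 2.48e5      # revised volume, km^3
formation_time_yr = 1.0e6       # assumed formation interval, years

supply_rate = hawaii_volume_km3 / formation_time_yr
print(f"Implied magma supply rate: {supply_rate:.2f} km^3/yr")  # ~0.25 km^3/yr
```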

  8. Quantifying sediment connectivity in an actively eroding gully complex, Waipaoa catchment, New Zealand

    NASA Astrophysics Data System (ADS)

    Taylor, Richard J.; Massey, Chris; Fuller, Ian C.; Marden, Mike; Archibald, Garth; Ries, William

    2018-04-01

    Using a combination of airborne LiDAR (2005) and terrestrial laser scanning (2007, 2008, 2010, 2011), sediment delivery processes and sediment connectivity in a 20-ha gully complex, which significantly contributes to the Waipaoa sediment cascade, are quantified over a 6-year period. The acquisition of terrain data from high-resolution surveys of the whole gully-fan system provides new insights into slope processes and slope-channel linkages operating in the complex. Raw terrain data from the airborne and ground-based laser scans were converted into raster DEMs with a vertical accuracy between surveys of <±0.1 m. Grid elevations in each successive DEM were subtracted from the previous DEM to provide models of change across the gully and fan complex. In these models deposition equates to positive and erosion to negative vertical change. Debris flows, slumping, and erosion by surface runoff (gullying in the conventional sense) generated on average 95,232 m^3 of sediment annually, with a standard deviation of ±20,806 m^3. The volumes of debris eroded from those areas dominated by surface erosion processes were higher than in areas dominated by landslide processes. Over the six-year study period, sediment delivery from the source zones to the fan was a factor of 1.4 times larger than the volume of debris exported from the fan into Te Weraroa Stream. The average annual volume of sediment exported to Te Weraroa Stream varies widely, from 23,195 to 102,796 m^3. Fluctuations in the volume of stored sediment within the fan, rather than external forcing by rainstorms or earthquakes, account for this annual variation. No large rainfall events occurred during the monitoring period; therefore, sediment volumes and transfer processes captured by this study are representative of the background conditions that operate in this geomorphic system.
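The DEM-differencing step described above (subtracting successive grids so that positive change is deposition and negative change is erosion) can be sketched as follows. The grids, cell size, and elevation values here are hypothetical, stand-ins for the co-registered survey rasters:

```python
import numpy as np

# Minimal DEM-of-difference (DoD) sketch on a made-up 4x4 grid.
cell_area_m2 = 1.0  # assumed 1 m x 1 m grid resolution

dem_2005 = np.array([[10.0, 10.2, 10.1, 10.3],
                     [10.4, 10.5, 10.2, 10.1],
                     [10.0,  9.8,  9.9, 10.0],
                     [ 9.7,  9.6,  9.8,  9.9]])
dem_2007 = dem_2005 + np.array([[ 0.0, -0.3, -0.2,  0.0],
                                [-0.4, -0.5,  0.0,  0.1],
                                [ 0.2,  0.3,  0.1,  0.0],
                                [ 0.0,  0.1,  0.0, -0.1]])

dod = dem_2007 - dem_2005                        # later survey minus earlier survey
erosion_m3 = -dod[dod < 0].sum() * cell_area_m2  # negative cells: material lost
deposition_m3 = dod[dod > 0].sum() * cell_area_m2
net_m3 = dod.sum() * cell_area_m2                # net change across the complex
print(erosion_m3, deposition_m3, net_m3)
```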

  9. Mixed sand and gravel beaches: accurate measurement of active layer depth and sediment transport volumes using PIT tagged tracer pebbles

    NASA Astrophysics Data System (ADS)

    Holland, A.; Moses, C.; Sear, D. A.; Cope, S.

    2016-12-01

    As sediments containing significant gravel portions are increasingly used for beach replenishment projects globally, the total number of beaches classified as `mixed sand and gravel' (MSG) increases. Calculations for required replenishment sediment volumes usually assume a uniform layer of sediment transport across and along the beach, but research into active layer (AL) depth has shown variations both across shore and according to sediment size distribution. This study addresses the need for more accurate calculations of sediment transport volumes on MSG beaches by using more precise measurements of AL depth and width, and virtual velocity of tracer pebbles. Variations in AL depth were measured along three main profile lines (from MHWS to MLWN) at Eastoke, Hayling Island (Hampshire, UK). Passive Integrated Transponder (PIT) tagged pebbles were deployed in columns, and their new locations repeatedly surveyed with RFID technology. These data were combined with daily dGPS beach profiles and sediment sampling for detailed analysis of the influence of beach morphodynamics on sediment transport volumes. Data were collected over two consecutive winter seasons: 2014-15 (relatively calm, average wave height <1 m) and 2015-16 (prolonged periods of moderate storminess, wave heights of 1-2 m). The active layer was, on average, 22% of wave height where beach slope (tanβ) is 0.1, with variations noted according to slope angle, sediment distribution, and beach groundwater level. High groundwater levels and a change in sediment proportions in the sandy lower foreshore reduced the AL to 10% of wave height in this area. The disparity in AL depth across the beach profile indicates that traditional models are not accurately representing bulk sediment transport on MSG beaches. 
It is anticipated that by improving model inputs, beach managers will be better able to predict necessary volumes and sediment grain size proportions of replenishment material for effective management of MSG beaches.

  10. Quantitative tomographic measurements of opaque multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GEORGE,DARIN L.; TORCZYNSKI,JOHN R.; SHOLLENBERGER,KIM ANN

    2000-03-01

    An electrical-impedance tomography (EIT) system has been developed for quantitative measurements of radial phase distribution profiles in two-phase and three-phase vertical column flows. The EIT system is described along with the computer algorithm used for reconstructing phase volume fraction profiles. EIT measurements were validated by comparison with a gamma-densitometry tomography (GDT) system. The EIT system was used to accurately measure average solid volume fractions up to 0.05 in solid-liquid flows, and radial gas volume fraction profiles in gas-liquid flows with gas volume fractions up to 0.15. In both flows, average phase volume fractions and radial volume fraction profiles from GDT and EIT were in good agreement. A minor modification to the formula used to relate conductivity data to phase volume fractions was found to improve agreement between the methods. GDT and EIT were then applied together to simultaneously measure the solid, liquid, and gas radial distributions within several vertical three-phase flows. For average solid volume fractions up to 0.30, the gas distribution for each gas flow rate was approximately independent of the amount of solids in the column. Measurements made with this EIT system demonstrate that EIT may be used successfully for noninvasive, quantitative measurements of dispersed multiphase flows.

  11. Coupled Human-Environment Dynamics of Forest Pest Spread and Control in a Multi-Patch, Stochastic Setting

    PubMed Central

    Ali, Qasim; Bauch, Chris T.; Anand, Madhur

    2015-01-01

    Background The transportation of camp firewood infested by non-native forest pests such as Asian long-horned beetle (ALB) and emerald ash borer (EAB) has severe impacts on North American forests. Once invasive forest pests are established, it can be difficult to eradicate them. Hence, preventing the long-distance transport of firewood by individuals is crucial. Methods Here we develop a stochastic simulation model that captures the interaction between forest pest infestations and human decisions regarding firewood transportation. The population of trees is distributed across 10 patches (parks) comprising a “low volume” partition of 5 patches that experience a low volume of park visitors, and a “high volume” partition of 5 patches experiencing a high visitor volume. The infestation spreads within a patch—and also between patches—according to the probability of between-patch firewood transportation. Individuals decide to transport firewood or buy it locally based on the costs of locally purchased versus transported firewood, social norms, social learning, and level of concern for observed infestations. Results We find that the average time until a patch becomes infested depends nonlinearly on many model parameters. In particular, modest increases in the tree removal rate, modest increases in public concern for infestation, and modest decreases in the cost of locally purchased firewood, relative to baseline (current) values, cause very large increases in the average time until a patch becomes infested due to firewood transport from other patches, thereby better preventing long-distance spread. Patches that experience lower visitor volumes benefit more from firewood movement restrictions than patches that experience higher visitor volumes. 
Also, cross–patch infestations not only seed new infestations, they can also worsen existing infestations to a surprising extent: long-term infestations are more intense in the high volume patches than the low volume patches, even when infestation is already endemic everywhere. Conclusions The success of efforts to prevent long-distance spread of forest pests may depend sensitively on the interaction between outbreak dynamics and human social processes, with similar levels of effort producing very different outcomes depending on where the coupled human and natural system exists in parameter space. Further development of such modeling approaches through better empirical validation should yield more precise recommendations for ways to optimally prevent the long-distance spread of invasive forest pests. PMID:26430902

  12. Discreteness-induced concentration inversion in mesoscopic chemical systems.

    PubMed

    Ramaswamy, Rajesh; González-Segredo, Nélido; Sbalzarini, Ivo F; Grima, Ramon

    2012-04-10

    Molecular discreteness is apparent in small-volume chemical systems, such as biological cells, leading to stochastic kinetics. Here we present a theoretical framework to understand the effects of discreteness on the steady state of a monostable chemical reaction network. We consider independent realizations of the same chemical system in compartments of different volumes. Rate equations ignore molecular discreteness and predict the same average steady-state concentrations in all compartments. However, our theory predicts that the average steady state of the system varies with volume: if a species is more abundant than another for large volumes, then the reverse occurs for volumes below a critical value, leading to a concentration inversion effect. The addition of extrinsic noise increases the size of the critical volume. We theoretically predict the critical volumes and verify, by exact stochastic simulations, that rate equations are qualitatively incorrect in sub-critical volumes.
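The "exact stochastic simulations" mentioned above are typically Gillespie-type. Below is a minimal sketch on a toy production/dimerization network, not the paper's reaction system, with made-up rates, showing how copy numbers in a compartment of volume V are simulated event by event (so volume-dependent averages can be compared against rate-equation predictions):

```python
import random

def gillespie_dimerization(volume, k1=1.0, k2=0.02, t_end=50.0, seed=1):
    """Minimal Gillespie SSA for a hypothetical network:
      0 --k1*V--> A         (production; propensity scales with volume V)
      A + A --k2/V--> 0     (bimolecular loss; propensity scales as 1/V)
    Returns the copy number of A at time t_end.
    """
    rng = random.Random(seed)
    a, t = 0, 0.0
    while True:
        r_prod = k1 * volume
        r_dim = (k2 / volume) * a * (a - 1) / 2.0
        r_tot = r_prod + r_dim
        t += rng.expovariate(r_tot)      # exponential waiting time to next event
        if t > t_end:
            return a
        if rng.random() < r_prod / r_tot:
            a += 1                        # production event
        else:
            a -= 2                        # dimerization consumes two molecules

small = gillespie_dimerization(volume=1.0)
large = gillespie_dimerization(volume=50.0)
print(small, large)
```

Averaging many such realizations at each volume is how the volume dependence of the stochastic steady state can be checked against the volume-independent rate-equation prediction.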

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  14. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost, and development risk standpoints.

  15. Investigation of Micro- and Nanosized Particle Erosion in a 90° Pipe Bend Using a Two-Phase Discrete Phase Model

    PubMed Central

    Safaei, M. R.; Mahian, O.; Garoosi, F.; Hooman, K.; Karimipour, A.; Kazi, S. N.; Gharehkhani, S.

    2014-01-01

    This paper addresses erosion prediction in a 3-D, 90° elbow for two-phase (solid and liquid) turbulent flow with a low volume fraction of copper. For a range of particle sizes from 10 nm to 100 microns and particle volume fractions from 0.00 to 0.04, the simulations were performed for the velocity range of 5–20 m/s. The 3-D governing differential equations were discretized using the finite volume method. The influences of size and concentration of micro- and nanoparticles, shear forces, and turbulence on the erosion behavior of the fluid flow were studied. The model predictions are compared with earlier studies, and good agreement is found. The results indicate that the erosion rate is directly dependent on particle size and volume fraction as well as flow velocity. It has been observed that the maximum pressure has a direct relationship with the particle volume fraction and velocity but an inverse relationship with the particle diameter. It has also been noted that there is a threshold velocity as well as a threshold particle size, beyond which significant erosion effects kick in. The average friction factor is independent of the particle size and volume fraction at a given fluid velocity but increases with increasing inlet velocity. PMID:25379542

  16. RBF kernel based support vector regression to estimate the blood volume and heart rate responses during hemodialysis.

    PubMed

    Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H

    2009-01-01

    This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses in 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of the relative blood volume (RBV) change with time, as well as the percentage change in HR with respect to RBV, were obtained. The ε-insensitive loss function was used for SVR modeling. Selection of the design parameters, which include the capacity (C), the insensitivity region (ε), and the RBF kernel parameter (sigma), was made using a grid search approach, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data based on a k-fold cross-validation technique. Linear regression was also applied to fit the curves, and the AMSE was calculated for comparison with SVR. For the model of RBV with time, SVR gave a lower AMSE for both training (AMSE=1.5) and testing data (AMSE=1.4) compared to linear regression (AMSE=1.8 and 1.5). SVR also provided a better fit for HR with RBV for both training and testing data (AMSE=15.8 and 16.4) compared to linear regression (AMSE=25.2 and 20.1).
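The model-selection procedure described (fit candidate models, then score each by the average mean square error over k folds of held-out data) can be sketched generically. The data below are a made-up noisy RBV-like curve, and simple polynomial fits stand in for the paper's SVR and linear regression:

```python
import numpy as np

def kfold_amse(x, y, fit, predict, k=5, seed=0):
    """Average mean square error over k folds.
    `fit(x_train, y_train)` returns a model; `predict(model, x_test)` returns predictions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    mses = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])
        pred = predict(model, x[test])
        mses.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(mses))

# Hypothetical stand-in for the RBV-vs-time curve: a noisy quadratic decline.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 80)                        # dialysis time in hours (made up)
rbv = 100 - 1.5 * t - 0.4 * t**2 + rng.normal(0, 0.3, t.size)

linear = kfold_amse(t, rbv, lambda a, b: np.polyfit(a, b, 1),
                    lambda m, a: np.polyval(m, a))
quad = kfold_amse(t, rbv, lambda a, b: np.polyfit(a, b, 2),
                  lambda m, a: np.polyval(m, a))
print(linear, quad)  # the non-linear model should score the lower AMSE
```

The same `kfold_amse` harness would accept any fit/predict pair, which is how a kernel regressor and a linear baseline can be compared on equal footing.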

  17. Impact of removed tumor volume and location on patient outcome in glioblastoma.

    PubMed

    Awad, Al-Wala; Karsy, Michael; Sanai, Nader; Spetzler, Robert; Zhang, Yue; Xu, Yizhe; Mahan, Mark A

    2017-10-01

    Glioblastoma is an aggressive primary brain tumor with devastatingly poor prognosis. Multiple studies have shown the benefit of wider extent of resection (EOR) on patient overall survival (OS) and worsened survival with larger preoperative tumor volumes. However, the concomitant impact of postoperative tumor volume and eloquent location on OS has yet to be fully evaluated. We performed a retrospective chart review of adult patients treated for glioblastoma from January 2006 through December 2011. Adherence to standardized postoperative chemoradiation protocols was used as an inclusion criterion. Detailed volumetric and location analysis was performed on immediate preoperative and immediate postoperative magnetic resonance imaging. Cox proportional hazard modeling approach was employed to explore the modifying effects of EOR and eloquent location after adjusting for various confounders and associated characteristics, such as preoperative tumor volume and demographics. Of the 471 screened patients, 141 were excluded because they did not meet all inclusion criteria. The mean (±SD) age of the remaining 330 patients (60.6% male) was 58.9 ± 12.9 years; the mean preoperative and postoperative Karnofsky performance scores (KPSs) were 76.2 ± 10.3 and 80.0 ± 16.6, respectively. Preoperative tumor volume averaged 33.2 ± 29.0 ml, postoperative residual was 4.0 ± 8.1 ml, and average EOR was 88.6 ± 17.6%. The observed average follow-up was 17.6 ± 15.7 months, and mean OS was 16.7 ± 14.4 months. Survival analysis showed significantly shorter survival for patients with lesions in periventricular (16.8 ± 1.7 vs. 21.5 ± 1.4 mo, p = 0.03), deep nuclei/basal ganglia (11.6 ± 1.7 vs. 20.6 ± 1.2, p = 0.002), and multifocal (12.0 ± 1.4 vs. 21.3 ± 1.3 months, p = 0.0001) locations, but no significant influence on survival was seen for eloquent cortex sites (p = 0.14, range 0.07-0.9 for all individual locations). 
OS significantly improved with EOR in univariate analysis, averaging 22.3, 19.7, and 13.2 months for >90, 80-90, and 70-80% resection, respectively. Survival was 22.8, 19.0, and 12.7 months for 0, 0-5, and 5-10 ml postoperative residual, respectively. A hazard model showed that larger preoperative tumor volume [hazard ratio (HR) 1.05, 95% CI 1.02-1.07], greater age (HR 1.02, 95% CI 1.01-1.03), multifocality (HR 1.44, 95% CI 1.01-2.04), and deep nuclei/basal ganglia location (HR 2.05, 95% CI 1.27-3.3) were the most predictive of poor survival after adjusting for KPS and tumor location. There was a negligible but significant interaction between EOR and preoperative tumor volume (HR 0.9995, 95% CI 0.9993-0.9998), but EOR alone did not correlate with OS after adjusting for other factors. The interaction between EOR and preoperative tumor volume represented the tumor volume removed during surgery. In conclusion, EOR alone was not an important predictor of outcome during glioblastoma treatment once preoperative tumor volume, age, and deep nuclei/basal ganglia location were factored in. Instead, the interaction between EOR and preoperative volume, representing reduced disease burden, was an important predictor of improved OS. Removal of tumor from eloquent cortex did not impact postoperative KPS. These results suggest that aggressive surgical treatment to reduce postoperative residual while maintaining postoperative KPS may aid patient survival outcomes for a given tumor size and location.

  18. Regional white matter hyperintensity volume, not hippocampal atrophy, predicts incident Alzheimer’s disease in the community

    PubMed Central

    Brickman, Adam M.; Provenzano, Frank A.; Muraskin, Jordan; Manly, Jennifer J.; Blum, Sonja; Apa, Zoltan; Stern, Yaakov; Brown, Truman R.; Luchsinger, José A.; Mayeux, Richard

    2013-01-01

    Background New onset Alzheimer’s disease (AD) is often attributed to degenerative changes in the hippocampus. However, the contribution of regionally distributed small vessel cerebrovascular disease, visualized as white matter hyperintensities (WMH) on MRI, remains unclear. Objective To determine whether regional WMH and hippocampal volume predict incident AD in an epidemiological study. Design A longitudinal community-based epidemiological study of older adults from northern Manhattan. Setting The Washington Heights/Inwood Columbia Aging Project. Participants Between 2005 and 2007, 717 non-demented participants received MRI scans. An average of 40.28 (SD=9.77) months later, 503 returned for follow-up clinical examination and 46 met criteria for incident dementia (45 with AD). Regional WMH and relative hippocampal volumes were derived. Three Cox proportional hazards models were run to predict incident dementia, controlling for relevant variables. The first included all WMH measurements; the second included relative hippocampal volume; and the third combined the two measurements. Main outcome measures Incident Alzheimer’s disease. Results White matter hyperintensity volume in the parietal lobe predicted time to incident dementia (HR=1.194, p=0.031). Relative hippocampal volume did not predict incident dementia when considered alone (HR=0.419, p=0.768) or with the WMH measures included in the model (HR=0.302, p=0.701). Including hippocampal volume in the model did not notably alter the predictive utility of parietal lobe WMH (HR=1.197, p=0.049). Conclusion The findings highlight the regional specificity of the association of WMH with AD. It is not clear whether parietal WMH solely represent a marker for cerebrovascular burden or point to distinct injury compared to other regions. Future work should elucidate pathogenic mechanisms linking WMH and AD pathology. PMID:22945686

  19. Comparing CT perfusion with oxygen partial pressure in a rabbit VX2 soft-tissue tumor model.

    PubMed

    Sun, Chang-Jin; Li, Chao; Lv, Hai-Bo; Zhao, Cong; Yu, Jin-Ming; Wang, Guang-Hui; Luo, Yun-Xiu; Li, Yan; Xiao, Mingyong; Yin, Jun; Lang, Jin-Yi

    2014-01-01

    The aim of this study was to evaluate the oxygen partial pressure in the rabbit VX2 tumor model using 64-slice perfusion CT and to compare the results with those obtained using the oxygen microelectrode method. Perfusion CT was performed on 45 successfully constructed rabbit models of a VX2 brain tumor. The perfusion values of the brain tumor region of interest, the blood volume (BV), the time to peak (TTP), and the peak enhancement intensity (PEI) were measured. The results were compared with the partial pressure of oxygen (PO2) of that region of interest obtained using the oxygen microelectrode method. The perfusion values of the brain tumor region of interest in the 45 successfully constructed rabbit models of a VX2 brain tumor ranged from 1.3 to 127.0 ml/min/ml (average, 21.1 ± 26.7 ml/min/ml); BV ranged from 1.2 to 53.5 ml/100g (average, 22.2 ± 13.7 ml/100g); PEI ranged from 8.7 to 124.6 HU (average, 43.5 ± 28.7 HU); and TTP ranged from 8.2 to 62.3 s (average, 38.8 ± 14.8 s). The PO2 in the corresponding region ranged from 0.14 to 47 mmHg (average, 16 ± 14.8 mmHg). The perfusion CT values positively correlated with the tumor PO2, which can be used for evaluating tumor hypoxia in clinical practice.

  20. Produced Water Treatment Using Geothermal Energy from Oil and Gas Wells: An Appropriateness of Decommissioned Wells Index (ADWI) Approach

    NASA Astrophysics Data System (ADS)

    Kiaghadi, A.; Rifai, H. S.

    2016-12-01

    This study investigated the feasibility of harnessing geothermal energy from retrofitted decommissioned oil and gas wells to power desalination units and overcome the produced water treatment energy barrier. Previous studies using heat transfer models have indicated that well depth, geothermal gradient, formation heat conductivity, and produced water salt levels are the most important constraints affecting the achievable volume of treated water. Thus, the challenge of identifying which wells would be best suited for retrofit as geothermal wells was addressed by defining an Appropriateness of Decommissioned Wells Index (ADWI) using a 25 km x 25 km grid over Texas. Heat transfer modeling combined with a fuzzy logic methodology was used to estimate the ADWI at each grid cell on a scale of Very Poor, Poor, Average, Good, and Excellent. Values for each of the four constraints were extracted from existing databases and were used to select 20 representative values that covered the full range of the data. A heat transfer model was run for all 160,000 possible combination scenarios, and the results were regressed to estimate weighting coefficients that indicate the relative effect of well depth, geothermal gradient, heat conductivity, and produced water salt levels on the volume of treated water in Texas. The results indicated that wells located in cells with an ADWI of "Average", "Good", or "Excellent" can potentially deliver 35,000, 106,000, or 240,000 L/day of treated water, respectively. Almost 98% of the cells in the Granite Wash, 97% in the Eagle Ford Shale, 90% in the Haynesville Shale, 79% in the Permian Basin, and 78% in the Barnett Shale were identified as better than "Average" locations; whereas south of the Eagle Ford, the southwestern Permian Basin, and the center of the Granite Wash were "Excellent". Importantly, most of the locations with better than "Average" ADWI are within drought-prone agricultural regions that would benefit from this resilient source of clean water.

  1. Global and regional annual brain volume loss rates in physiological aging.

    PubMed

    Schippling, Sven; Ostwaldt, Ann-Christin; Suppa, Per; Spies, Lothar; Manogaran, Praveena; Gocke, Carola; Huppertz, Hans-Jürgen; Opfer, Roland

    2017-03-01

    The objective is to estimate average global and regional percentage brain volume loss per year (BVL/year) of the physiologically ageing brain. Two independent, cross-sectional single scanner cohorts of healthy subjects were included. The first cohort (n = 248) was acquired at the Medical Prevention Center (MPCH) in Hamburg, Germany. The second cohort (n = 316) was taken from the Open Access Series of Imaging Studies (OASIS). Brain parenchyma (BP), grey matter (GM), white matter (WM), corpus callosum (CC), and thalamus volumes were calculated. A non-parametric technique was applied to fit the resulting age-volume data. For each age, the BVL/year was derived from the age-volume curves. The resulting BVL/year curves were compared between the two cohorts. For the MPCH cohort, the BVL/year curve of the BP was an increasing function starting from 0.20% at the age of 35 years increasing to 0.52% at 70 years (corresponding values for GM ranged from 0.32 to 0.55%, WM from 0.02 to 0.47%, CC from 0.07 to 0.48%, and thalamus from 0.25 to 0.54%). Mean absolute difference between BVL/year trajectories across the age range of 35-70 years was 0.02% for BP, 0.04% for GM, 0.04% for WM, 0.11% for CC, and 0.02% for the thalamus. Physiological BVL/year rates were remarkably consistent between the two cohorts and independent from the scanner applied. Average BVL/year was clearly age and compartment dependent. These results need to be taken into account when defining cut-off values for pathological annual brain volume loss in disease models, such as multiple sclerosis.
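Deriving BVL/year from an age-volume curve, as described above, amounts to taking the negative relative slope of the fitted curve at each age. A sketch with a made-up smooth exponential decline standing in for the non-parametric cohort fit:

```python
import numpy as np

# Hypothetical age-volume curve: smooth decline over the 35-70 year range.
# The curve and its rate constant are illustrative, not the cohort fit.
ages = np.arange(35, 71)                          # years
volume = 1200.0 * np.exp(-0.004 * (ages - 35))    # brain parenchyma volume, ml

# BVL/year at each age: negative relative slope of the curve, in percent.
bvl_per_year = -100.0 * np.gradient(volume, ages) / volume
print(round(bvl_per_year[0], 2))  # ~0.4 %/yr throughout, for this toy curve
```

For a real fitted curve the same finite-difference step yields the age-dependent BVL/year trajectories compared between cohorts in the abstract.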

  2. Predicting relationship between magnetostriction and applied field of magnetostrictive composites

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2008-03-01

    Taking the demagnetization effect into consideration, a model for calculating the magnetostriction of a single particle under an applied field is first built up. Then, treating the particle magnetostriction as an eigenstrain and using the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach is developed to calculate the average magnetostriction of the composites under any applied field, including saturation. Results calculated with this approach indicate that the saturation magnetostriction of magnetostrictive composites increases with increasing particle aspect ratio and particle volume fraction and with decreasing Young's modulus of the matrix, and that the influence of the applied field on the magnetostriction of the composites becomes more significant at larger particle volume fractions or aspect ratios.

  3. Structure of insoluble immune complexes as studied by spectroturbidimetry and dynamic light scattering

    NASA Astrophysics Data System (ADS)

    Khlebtsov, Boris N.; Burygin, Gennadii L.; Matora, Larisa Y.; Shchyogolev, Sergei Y.; Khlebtsov, Nikolai G.

    2004-07-01

    We describe two variants of a method for determining the average composition of insoluble immune complex particles (IICP). The first variant is based on measuring the specific turbidity (the turbidity per unit mass concentration of the dispersed substance) and the average size of IICP determined from dynamic light scattering (DLS). In the second variant, the wavelength exponent (i.e., the slope of the logarithmic turbidity spectrum) is used in combination with specific turbidity measurements. Both variants allow the average biopolymer volume fraction to be determined in terms of the average refractive index of IICP. The method is exemplified by two experimental antigen+antibody systems: (i) lipopolysaccharide-protein complex (LPPC) of Azospirillum brasilense Sp245+rabbit anti-LPPC; and (ii) human IgG (hIgG)+sheep anti-hIgG. Our measurements by the two methods for both types of systems gave, on the average, the same result: the volume fraction of the IICP biopolymers is about 30%; accordingly, the volume fraction of buffer solvent is 70%.

  4. Computational Flow Modeling of Hydrodynamics in Multiphase Trickle-Bed Reactors

    NASA Astrophysics Data System (ADS)

    Lopes, Rodrigo J. G.; Quinta-Ferreira, Rosa M.

    2008-05-01

    This study aims to incorporate the most recent multiphase models in order to investigate the hydrodynamic behavior of a trickle-bed reactor (TBR) in terms of pressure drop and liquid holdup. Taking into account transport phenomena such as mass and heat transfer, an Eulerian k-fluid model was developed, resulting from the volume averaging of the continuity and momentum equations, and solved for a 3D representation of the catalytic bed. The computational fluid dynamics (CFD) model predicts the hydrodynamic parameters quite well if good closures for fluid/fluid and fluid/particle interactions are incorporated in the multiphase model. Moreover, catalytic performance is investigated with the catalytic wet oxidation of a phenolic pollutant.

  5. Quantitation of mandibular ramus volume as a source of bone grafting.

    PubMed

    Verdugo, Fernando; Simonian, Krikor; Smith McDonald, Roberto; Nowzari, Hessam

    2009-10-01

    When alveolar atrophy impairs dental implant placement, ridge augmentation using a mandibular ramus graft may be considered. In live patients, however, an accurate calculation of the amount of bone that can be safely harvested from the ramus has not been reported. The use of a software program to perform these calculations can aid in preventing surgical complications. The aim of the present study was to intra-surgically quantify the volume of ramus bone graft that can be safely harvested in live patients and compare it to presurgical computerized tomographic calculations. The AutoCAD software program quantified the ramus bone graft volume in 40 consecutive patients from computerized tomography scans. Direct intra-surgical measurements were recorded thereafter and compared to the software data (n = 10). In these 10 patients, the bone volume was also measured at the recipient sites 6 months post-sinus augmentation. The mandibular second and third molar areas provided the thickest cortical graft, averaging 2.8 +/- 0.6 mm. The thinnest bone was immediately posterior to the third molar (1.9 +/- 0.3 mm). The volume of ramus bone graft measured by AutoCAD averaged 0.8 mL (standard deviation [SD] 0.2 mL, range: 0.4-1.2 mL). The volume of bone graft measured intra-surgically averaged 2.5 mL (SD 0.4 mL, range: 1.8-3.0 mL). The difference between the two measurement methods was significant (p < 0.001). The bone volume measured 6 months post-sinus augmentation averaged 2.2 mL (SD 0.4 mL, range: 1.6-2.8 mL), with a mean loss of 0.3 mL in volume. The mandibular second molar area provided the thickest cortical graft. A cortical plate averaging 2.8 mm at the combined second and third molar areas provided 2.5 mL of particulated volume. The use of a design software program can improve surgical treatment planning prior to ramus bone grafting. The AutoCAD software program did not overestimate the volume of bone that can be safely harvested from the mandibular ramus.

  6. Incorporating GIS building data and census housing statistics for sub-block-level population estimation

    USGS Publications Warehouse

    Wu, S.-S.; Wang, L.; Qiu, X.

    2008-01-01

    This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to those for sub-census-block areas of the same size relative to associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
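    The core of such a deterministic model is proportional allocation; the minimal sketch below distributes a block's population by building-volume share alone (the function name and figures are illustrative, and the article's actual model additionally uses three census housing statistics):

```python
def estimate_subblock_population(block_population, block_volume, subblock_volume):
    """Allocate a census block's population to a sub-block area in
    proportion to its share of the block's total building volume."""
    if block_volume <= 0:
        raise ValueError("block building volume must be positive")
    return block_population * (subblock_volume / block_volume)

# A block of 500 people with 40,000 m^3 of residential building volume;
# a sub-block area containing 10,000 m^3 receives a quarter of the population.
print(estimate_subblock_population(500, 40_000.0, 10_000.0))  # -> 125.0
```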

  7. Measurement of Coherent π^{+} Production in Low Energy Neutrino-Carbon Scattering.

    PubMed

    Abe, K; Andreopoulos, C; Antonova, M; Aoki, S; Ariga, A; Assylbekov, S; Autiero, D; Ban, S; Barbi, M; Barker, G J; Barr, G; Bartet-Friburg, P; Batkiewicz, M; Bay, F; Berardi, V; Berkman, S; Bhadra, S; Blondel, A; Bolognesi, S; Bordoni, S; Boyd, S B; Brailsford, D; Bravar, A; Bronner, C; Buizza Avanzini, M; Calland, R G; Campbell, T; Cao, S; Caravaca Rodríguez, J; Cartwright, S L; Castillo, R; Catanesi, M G; Cervera, A; Cherdack, D; Chikuma, N; Christodoulou, G; Clifton, A; Coleman, J; Collazuol, G; Coplowe, D; Cremonesi, L; Dabrowska, A; De Rosa, G; Dealtry, T; Denner, P F; Dennis, S R; Densham, C; Dewhurst, D; Di Lodovico, F; Di Luise, S; Dolan, S; Drapier, O; Duffy, K E; Dumarchez, J; Dytman, S; Dziewiecki, M; Emery-Schrenk, S; Ereditato, A; Feusels, T; Finch, A J; Fiorentini, G A; Friend, M; Fujii, Y; Fukuda, D; Fukuda, Y; Furmanski, A P; Galymov, V; Garcia, A; Giffin, S G; Giganti, C; Gizzarelli, F; Gonin, M; Grant, N; Hadley, D R; Haegel, L; Haigh, M D; Hamilton, P; Hansen, D; Harada, J; Hara, T; Hartz, M; Hasegawa, T; Hastings, N C; Hayashino, T; Hayato, Y; Helmer, R L; Hierholzer, M; Hillairet, A; Himmel, A; Hiraki, T; Hirota, S; Hogan, M; Holeczek, J; Horikawa, S; Hosomi, F; Huang, K; Ichikawa, A K; Ieki, K; Ikeda, M; Imber, J; Insler, J; Intonti, R A; Irvine, T J; Ishida, T; Ishii, T; Iwai, E; Iwamoto, K; Izmaylov, A; Jacob, A; Jamieson, B; Jiang, M; Johnson, S; Jo, J H; Jonsson, P; Jung, C K; Kabirnezhad, M; Kaboth, A C; Kajita, T; Kakuno, H; Kameda, J; Karlen, D; Karpikov, I; Katori, T; Kearns, E; Khabibullin, M; Khotjantsev, A; Kielczewska, D; Kikawa, T; Kim, H; Kim, J; King, S; Kisiel, J; Knight, A; Knox, A; Kobayashi, T; Koch, L; Koga, T; Konaka, A; Kondo, K; Kopylov, A; Kormos, L L; Korzenev, A; Koshio, Y; Kropp, W; Kudenko, Y; Kurjata, R; Kutter, T; Lagoda, J; Lamont, I; Larkin, E; Lasorak, P; Laveder, M; Lawe, M; Lazos, M; Lindner, T; Liptak, Z J; Litchfield, R P; Li, X; Longhin, A; Lopez, J P; Ludovici, L; Lu, X; Magaletti, L; Mahn, K; Malek, 
M; Manly, S; Marino, A D; Marteau, J; Martin, J F; Martins, P; Martynenko, S; Maruyama, T; Matveev, V; Mavrokoridis, K; Ma, W Y; Mazzucato, E; McCarthy, M; McCauley, N; McFarland, K S; McGrew, C; Mefodiev, A; Metelko, C; Mezzetto, M; Mijakowski, P; Minamino, A; Mineev, O; Mine, S; Missert, A; Miura, M; Moriyama, S; Mueller, Th A; Murphy, S; Myslik, J; Nakadaira, T; Nakahata, M; Nakamura, K G; Nakamura, K; Nakamura, K D; Nakayama, S; Nakaya, T; Nakayoshi, K; Nantais, C; Nielsen, C; Nirkko, M; Nishikawa, K; Nishimura, Y; Novella, P; Nowak, J; O'Keeffe, H M; Ohta, R; Okumura, K; Okusawa, T; Oryszczak, W; Oser, S M; Ovsyannikova, T; Owen, R A; Oyama, Y; Palladino, V; Palomino, J L; Paolone, V; Patel, N D; Pavin, M; Payne, D; Perkin, J D; Petrov, Y; Pickard, L; Pickering, L; Pinzon Guerra, E S; Pistillo, C; Popov, B; Posiadala-Zezula, M; Poutissou, J-M; Poutissou, R; Przewlocki, P; Quilain, B; Radermacher, T; Radicioni, E; Ratoff, P N; Ravonel, M; Rayner, M A M; Redij, A; Reinherz-Aronis, E; Riccio, C; Rojas, P; Rondio, E; Roth, S; Rubbia, A; Rychter, A; Sacco, R; Sakashita, K; Sánchez, F; Sato, F; Scantamburlo, E; Scholberg, K; Schoppmann, S; Schwehr, J; Scott, M; Seiya, Y; Sekiguchi, T; Sekiya, H; Sgalaberna, D; Shah, R; Shaikhiev, A; Shaker, F; Shaw, D; Shiozawa, M; Shirahige, T; Short, S; Smy, M; Sobczyk, J T; Sobel, H; Sorel, M; Southwell, L; Stamoulis, P; Steinmann, J; Stewart, T; Stowell, P; Suda, Y; Suvorov, S; Suzuki, A; Suzuki, K; Suzuki, S Y; Suzuki, Y; Tacik, R; Tada, M; Takahashi, S; Takeda, A; Takeuchi, Y; Tanaka, H K; Tanaka, H A; Terhorst, D; Terri, R; Thakore, T; Thompson, L F; Tobayama, S; Toki, W; Tomura, T; Touramanis, C; Tsukamoto, T; Tzanov, M; Uchida, Y; Vacheret, A; Vagins, M; Vallari, Z; Vasseur, G; Wachala, T; Wakamatsu, K; Walter, C W; Wark, D; Warzycha, W; Wascko, M O; Weber, A; Wendell, R; Wilkes, R J; Wilking, M J; Wilkinson, C; Wilson, J R; Wilson, R J; Yamada, Y; Yamamoto, K; Yamamoto, M; Yanagisawa, C; Yano, T; Yen, S; Yershov, N; 
Yokoyama, M; Yoo, J; Yoshida, K; Yuan, T; Yu, M; Zalewska, A; Zalipska, J; Zambelli, L; Zaremba, K; Ziembicki, M; Zimmerman, E D; Zito, M; Żmuda, J

    2016-11-04

    We report the first measurement of the flux-averaged cross section for charged current coherent π^{+} production on carbon for neutrino energies less than 1.5 GeV, and with a restriction on the final state phase space volume in the T2K near detector, ND280. Comparisons are made with predictions from the Rein-Sehgal coherent production model and the model by Alvarez-Ruso et al., the latter representing the first implementation of an instance of the new class of microscopic coherent models in a neutrino interaction Monte Carlo event generator. We observe a clear event excess above background, disagreeing with the null results reported by K2K and SciBooNE in a similar neutrino energy region. The measured flux-averaged cross sections are below those predicted by both the Rein-Sehgal and Alvarez-Ruso et al. models.

  8. Age group classification and gender detection based on forced expiratory spirometry.

    PubMed

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for gender detection and age group classification. The proposed method has three main stages: feature extraction, model training, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of a test subject is estimated using the trained GMM (or SVM) model. Experiments were evaluated on a large database of 4,571 subjects. The experimental results show that the average correct classification rates of both the GMM and SVM methods based on the FES test are more than 99.3% and 96.8% for gender and age group classification, respectively.
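    As a rough illustration of the likelihood-based detection stage, the sketch below fits a single Gaussian per class to one hypothetical spirometry feature and classifies by the higher likelihood; the paper's actual method trains multi-component GMMs (and SVMs) on several features, so all samples and feature choices here are invented:

```python
import math

def fit_gaussian(samples):
    """Fit a single Gaussian (mean, variance) to 1-D feature samples --
    a one-component stand-in for the paper's Gaussian mixture models."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

def log_likelihood(x, params):
    """Log of the Gaussian density N(x; mean, var), up to exact constants."""
    mean, var = params
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(x, class_params):
    """Assign x to the class whose Gaussian gives the highest likelihood."""
    return max(class_params, key=lambda c: log_likelihood(x, class_params[c]))

# Hypothetical peak-expiratory-flow feature (L/s) for two gender classes.
params = {"male": fit_gaussian([9.1, 10.2, 9.8, 10.5]),
          "female": fit_gaussian([6.8, 7.4, 7.1, 7.9])}
print(classify(9.9, params))   # -> male
print(classify(7.0, params))   # -> female
```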

  9. Direct Numerical Simulation of Surfactant-Stabilized Emulsions Morphology and Shear Viscosity in Starting Shear Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roar Skartlien; Espen Sollum; Andreas Akselsen

    2012-07-01

    A 3D lattice Boltzmann model for two-phase flow with amphiphilic surfactant was used to investigate the evolution of emulsion morphology and shear stress in starting shear flow. The interfacial contributions were analyzed for low and high volume fractions and varying surfactant activity. A transient viscoelastic contribution to the emulsion rheology under constant strain rate conditions was attributed to the interfacial stress. For droplet volume fractions below 0.3 and an average capillary number of about 0.25, highly elliptical droplets formed. Consistent with affine deformation models, gradual elongation of the droplets increased the shear stress at early times and reduced it at later times. Lower interfacial tension with increased surfactant activity counterbalanced the effect of increased interfacial area, and the net shear stress did not change significantly. For higher volume fractions, co-continuous phases with a complex topology were formed. The surfactant decreased the interfacial shear stress due mainly to advection of surfactant to higher curvature areas. Our results are in qualitative agreement with experimental data for polymer blends in terms of transient interfacial stresses and limited enhancement of the emulsion viscosity at larger volume fractions where the phases are co-continuous.
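    The average capillary number of about 0.25 quoted above is the standard ratio of viscous shear stress on a droplet to the restoring interfacial tension stress; a minimal sketch with hypothetical fluid parameters (not values taken from the simulations):

```python
def capillary_number(viscosity, shear_rate, droplet_radius, interfacial_tension):
    """Ca = mu * gamma_dot * a / sigma: viscous shear stress acting on a
    droplet of radius a, relative to the interfacial tension stress sigma/a."""
    return viscosity * shear_rate * droplet_radius / interfacial_tension

# Hypothetical values: 10 mPa.s matrix, shear rate 50 1/s, 10-micron droplet,
# interfacial tension lowered to 0.02 mN/m by surfactant.
print(round(capillary_number(0.010, 50.0, 10e-6, 2e-5), 2))  # -> 0.25
```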

  10. Numerical solution of the Saint-Venant equations by an efficient hybrid finite-volume/finite-difference method

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Khan, Abdul A.

    2018-04-01

    A computationally efficient hybrid finite-volume/finite-difference method is proposed for the numerical solution of Saint-Venant equations in one-dimensional open channel flows. The method adopts a mass-conservative finite volume discretization for the continuity equation and a semi-implicit finite difference discretization for the dynamic-wave momentum equation. The spatial discretization of the convective flux term in the momentum equation employs an upwind scheme and the water-surface gradient term is discretized using three different schemes. The performance of the numerical method is investigated in terms of efficiency and accuracy using various examples, including steady flow over a bump, dam-break flow over wet and dry downstream channels, wetting and drying in a parabolic bowl, and dam-break floods in laboratory physical models. Numerical solutions from the hybrid method are compared with solutions from a finite volume method along with analytic solutions or experimental measurements. Comparisons demonstrate that the hybrid method is efficient, accurate, and robust in modeling various flow scenarios, including subcritical, supercritical, and transcritical flows. In this method, the QUICK scheme for the surface slope discretization is more accurate and less diffusive than the center difference and the weighted average schemes.
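    A mass-conservative finite-volume step for the continuity equation can be sketched generically as below, with first-order upwind interface fluxes; this is an illustrative discretization under simplifying assumptions (known velocities, solid walls at both ends), not the paper's exact scheme:

```python
def fv_continuity_step(h, u, dx, dt):
    """One explicit finite-volume update of the 1-D continuity equation
    dh/dt + d(hu)/dx = 0 using first-order upwind interface fluxes.
    h: cell-averaged depths; u: cell velocities (assumed known here; the
    paper advances momentum with a semi-implicit finite difference)."""
    n = len(h)
    flux = [0.0] * (n + 1)  # interface fluxes; walls at both ends stay 0
    for i in range(1, n):
        # Upwind: take the depth from the cell the flow is coming from.
        ua = 0.5 * (u[i - 1] + u[i])
        flux[i] = h[i - 1] * ua if ua >= 0 else h[i] * ua
    return [h[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

h = [1.0, 1.0, 0.5, 0.5]
u = [0.2, 0.2, 0.2, 0.2]
h_new = fv_continuity_step(h, u, dx=1.0, dt=0.1)
# The update is conservative: total depth is unchanged by the step.
print(sum(h_new), sum(h))
```

    Because each interface flux is subtracted from one cell and added to its neighbour, mass conservation holds to machine precision, which is the property the paper's continuity discretization is built around.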

  11. SU-E-T-122: Anisotropic Analytical Algorithm (AAA) Vs. Acuros XB (AXB) in Stereotactic Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mynampati, D; Scripes, P Godoy; Kuo, H

    2015-06-15

    Purpose: To evaluate dosimetric differences between the superposition beam model (AAA) and the deterministic photon transport solver (AXB) in lung SBRT and cranial SRS dose computations. Methods: Ten cranial SRS and ten lung SBRT plans computed with Varian AAA 11.0 were re-planned using Acuros XB 11.0 with fixed MU. A 6 MV photon beam model with HD120 MLC was used for dose calculations. Four non-coplanar conformal arcs delivered 21 Gy or 18 Gy to SRS targets (0.4 to 6.2 cc). 54 Gy (3 fractions) or 50 Gy (5 fractions) was planned for SBRT targets (7.3 to 13.9 cc) using two non-coplanar VMAT arcs. Plan comparison parameters were dose to 1% of the PTV volume (D1), dose to 99% of the PTV volume (D99), target mean dose (Dmean), conformity index (CI, ratio of prescription isodose volume to PTV), homogeneity index [HI, (D2%-D98%)/Dmean], and R50 (ratio of 50% prescription isodose volume to PTV). OAR parameters were brain volume receiving 12 Gy (V12Gy) and maximum dose (D0.03) to brainstem for SRS. For lung SBRT, maximum dose to heart and cord, mean lung dose (MLD), and lung volume receiving 20 Gy (V20Gy) were computed. PTV parameters were compared by percentage difference between AXB and AAA; OAR parameters and HI were compared by absolute difference between the two calculations. For analysis, a paired t-test was performed over the parameters. Results: Compared to AAA, AXB SRS plans have on average 3.2% lower D99, 6.5% lower CI, and 3 cc less brain V12. However, AXB SBRT plans have higher D1, R50, and Dmean by 3.15%, 1.63%, and 2.5%, respectively. For SRS and SBRT, AXB plans have average HI 2% and 4.4% higher than AAA plans. In both techniques, all other parameters vary within 1% or 1 Gy. In both sets only two parameters have P > 0.05. Conclusion: Although the t-test results indicate differences between AXB and AAA plans, the dose differences between the two algorithms are clinically insignificant.
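    The plan metrics compared above reduce to simple ratios; a sketch using the abstract's own definitions, with hypothetical plan values:

```python
def conformity_index(prescription_isodose_volume, ptv_volume):
    """CI as defined in the abstract: prescription isodose volume / PTV."""
    return prescription_isodose_volume / ptv_volume

def homogeneity_index(d2, d98, dmean):
    """HI as defined in the abstract: (D2% - D98%) / Dmean."""
    return (d2 - d98) / dmean

# Hypothetical SRS plan: 21 Gy prescription isodose covering 4.6 cc
# around a 4.0 cc PTV, with D2% = 22.5 Gy, D98% = 19.8 Gy, Dmean = 21.2 Gy.
print(round(conformity_index(4.6, 4.0), 2))           # -> 1.15
print(round(homogeneity_index(22.5, 19.8, 21.2), 3))  # -> 0.127
```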

  12. Frequency-domain optical tomographic image reconstruction algorithm with the simplified spherical harmonics (SP3) light propagation model.

    PubMed

    Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H

    2017-06-01

    We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd-order absorption coefficient (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and sizes that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy: average errors below 3.7% in the phase and below 7.1% in the amplitude of the partial current at the boundary are reported. The inverse results show that the absorption and scattering coefficient maps are reconstructed more accurately with the SP3 model than with the SP1 model. This work therefore shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, in terms of both computational time and accuracy: it requires significantly less CPU time than the FD-ERT (S12) and is more accurate than the FD-SP1.

  13. First Measurements of the HCFC-142b Trend from Atmospheric Chemistry Experiment (ACE) Solar Occultation Spectra

    NASA Technical Reports Server (NTRS)

    Rinsland, Curtis P.; Chiou, Linda; Boone,Chris; Bernath, Peter; Mahieu, Emmanuel

    2009-01-01

    The first measurement of the HCFC-142b (CH3CClF2) trend near the tropopause has been derived from volume mixing ratio (VMR) measurements at northern and southern hemisphere mid-latitudes for the 2004-2008 time period from spaceborne solar occultation observations recorded at 0.02 cm⁻¹ resolution with the ACE (Atmospheric Chemistry Experiment) Fourier transform spectrometer. The HCFC-142b molecule is currently the third most abundant HCFC (hydrochlorofluorocarbon) in the atmosphere, and ACE measurements over this time span show a continuous rise in its volume mixing ratio. Monthly average measurements at northern and southern hemisphere mid-latitudes show similar increase rates that are consistent with surface trend measurements over a similar time span. A mean northern hemisphere profile for the time span shows a nearly constant VMR over the 8-20 km altitude range, consistent on average with in situ results for the same time span. The nearly constant vertical VMR profile also agrees with model predictions of a long lifetime in the lower atmosphere.

  14. 78 FR 75396 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... Specify the Exclusion of Odd Lot Transactions From Consolidated Average Daily Volume Calculations for a Limited Period of Time for Purposes of Certain Transaction Pricing on the Exchange Through January 31... specify the exclusion of odd lot transactions from consolidated average daily volume (``CADV...

  15. Whole stand volume tables for quaking aspen in the Rocky Mountains

    Treesearch

    Wayne D. Shepperd; H. Todd Mowrer

    1984-01-01

    Linear regression equations were developed to predict stand volumes for aspen given average stand basal area and average stand height. Tables constructed from these equations allow easy field estimation of gross merchantable cubic feet and Scribner board feet per acre, and cubic meters per hectare, using simple prism cruise data.
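    Such whole-stand tables reduce to evaluating a fitted linear equation in basal area and height; a sketch of that form with placeholder coefficients (the note's actual fitted values are not reproduced here):

```python
def predict_stand_volume(basal_area, mean_height, b0, b1, b2):
    """Linear whole-stand volume model of the general form used for such
    tables: volume = b0 + b1*BA + b2*H, with BA in ft^2/acre and H in ft.
    The published tables use fitted coefficients; these are placeholders."""
    return b0 + b1 * basal_area + b2 * mean_height

# Hypothetical coefficients for illustration only: a stand with
# 120 ft^2/acre basal area and 60 ft average height.
print(predict_stand_volume(120.0, 60.0, b0=-500.0, b1=15.0, b2=8.0))  # -> 1780.0
```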

  16. A preview of Delaware's timber resource

    Treesearch

    Joseph E. Barnard; Teresa M. Bowers

    1973-01-01

    The recently completed forest survey of Delaware indicated little change in the total forest area since the 1957 estimate. Softwood volume and the acreage of softwood types decreased considerably. Hardwoods now comprise two-thirds of the volume and three-fourths of the forest area. Total average annual growth exceeded removals, but softwood removals exceeded average...

  17. IMRT head and neck treatment planning with a commercially available Monte Carlo based planning system

    NASA Astrophysics Data System (ADS)

    Boudreau, C.; Heath, E.; Seuntjens, J.; Ballivy, O.; Parker, W.

    2005-03-01

    The PEREGRINE Monte Carlo dose-calculation system (North American Scientific, Cranberry Township, PA) is the first commercially available Monte Carlo dose-calculation code intended specifically for intensity modulated radiotherapy (IMRT) treatment planning and quality assurance. In order to assess the impact of Monte Carlo based dose calculations for IMRT clinical cases, dose distributions for 11 head and neck patients were evaluated using both PEREGRINE and the CORVUS (North American Scientific, Cranberry Township, PA) finite size pencil beam (FSPB) algorithm with equivalent path-length (EPL) inhomogeneity correction. For the target volumes, PEREGRINE calculations predict, on average, a less than 2% difference in the calculated mean and maximum doses to the gross tumour volume (GTV) and clinical target volume (CTV). An average 16% ± 4% and 12% ± 2% reduction in the volume covered by the prescription isodose line was observed for the GTV and CTV, respectively. Overall, no significant differences were noted in the doses to the mandible and spinal cord. For the parotid glands, PEREGRINE predicted a 6% ± 1% increase in the volume of tissue receiving a dose greater than 25 Gy and an increase of 4% ± 1% in the mean dose. Similar results were noted for the brainstem where PEREGRINE predicted a 6% ± 2% increase in the mean dose. The observed differences between the PEREGRINE and CORVUS calculated dose distributions are attributed to secondary electron fluence perturbations, which are not modelled by the EPL correction, issues of organ outlining, particularly in the vicinity of air cavities, and differences in dose reporting (dose to water versus dose to tissue type).

  18. Exploring the use of random regression models with Legendre polynomials to analyze measures of volume of ejaculate in Holstein bulls.

    PubMed

    Carabaño, M J; Díaz, C; Ugarte, C; Serrano, M

    2007-02-01

    Artificial insemination centers routinely collect records of quantity and quality of semen of bulls throughout the animals' productive period. The goal of this paper was to explore the use of random regression models with orthogonal polynomials to analyze repeated measures of semen production of Spanish Holstein bulls. A total of 8,773 records of volume of first ejaculate (VFE) collected between 12 and 30 mo of age from 213 Spanish Holstein bulls was analyzed under alternative random regression models. Legendre polynomial functions of increasing order (0 to 6) were fitted to the average trajectory, additive genetic, and permanent environmental effects. Age at collection and days in production were used as time variables. Heterogeneous and homogeneous residual variances were alternatively assumed. Analyses were carried out within a Bayesian framework. The logarithm of the marginal density and the cross-validation predictive ability of the data were used as model comparison criteria. Based on both criteria, age at collection as a time variable and heterogeneous residuals models are recommended to analyze changes of VFE over time. Both criteria indicated that fitting random curves for genetic and permanent environmental components as well as for the average trajectory improved the quality of models. Furthermore, models with a higher order polynomial for the permanent environmental components (5 to 6) than for the genetic components (4 to 5) and the average trajectory (2 to 3) tended to perform best. High-order polynomials were needed to accommodate the highly oscillating nature of the phenotypic values. Heritability and repeatability estimates, disregarding the extremes of the studied period, ranged from 0.15 to 0.35 and from 0.20 to 0.50, respectively, indicating that selection for VFE may be effective at any stage. Small differences among models were observed. 
Apart from the extremes, estimated correlations between ages decreased steadily from 0.9 and 0.4 for measures 1 mo apart to 0.4 and 0.2 for most distant measures for additive genetic and phenotypic components, respectively. Further investigation to account for environmental factors that may be responsible for the oscillating observations of VFE is needed.
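    The Legendre basis underlying these random regressions can be generated from the standard three-term recurrence, with age at collection mapped onto [-1, 1]; a minimal sketch (the age range 12-30 mo is from the study, the rest is generic):

```python
def legendre_basis(t, order):
    """Legendre polynomials P_0..P_order evaluated at t in [-1, 1], via
    the recurrence (n+1) P_{n+1} = (2n+1) t P_n - n P_{n-1}."""
    p = [1.0, t][: order + 1]
    for n in range(1, order):
        p.append(((2 * n + 1) * t * p[n] - n * p[n - 1]) / (n + 1))
    return p

def scale_age(age, lo=12.0, hi=30.0):
    """Map age at collection (12-30 mo in the study) onto [-1, 1]."""
    return -1.0 + 2.0 * (age - lo) / (hi - lo)

# Basis for a 2nd-order average trajectory at 21 mo of age (the midpoint).
print(legendre_basis(scale_age(21.0), 2))  # -> [1.0, 0.0, -0.5]
```

    In a random regression model, each animal's genetic and permanent environmental deviations are linear combinations of such basis values, with the combination coefficients treated as correlated random effects.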

  19. Model-based monitoring of stormwater runoff quality.

    PubMed

    Birch, Heidi; Vezzaro, Luca; Mikkelsen, Peter Steen

    2013-01-01

    Monitoring of micropollutants (MP) in stormwater is essential to evaluate the impacts of stormwater on the receiving aquatic environment. The aim of this study was to investigate how different strategies for monitoring of stormwater quality (combining a model with field sampling) affect the information obtained about MP discharged from the monitored system. A dynamic stormwater quality model was calibrated using MP data collected by automatic volume-proportional sampling and passive sampling in a storm drainage system on the outskirts of Copenhagen (Denmark) and a 10-year rain series was used to find annual average (AA) and maximum event mean concentrations. Use of this model reduced the uncertainty of predicted AA concentrations compared to a simple stochastic method based solely on data. The predicted AA concentration, obtained by using passive sampler measurements (1 month installation) for calibration of the model, resulted in the same predicted level but with narrower model prediction bounds than by using volume-proportional samples for calibration. This shows that passive sampling allows for a better exploitation of the resources allocated for stormwater quality monitoring.

  20. Cosmological backreaction within the Szekeres model and emergence of spatial curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolejko, Krzysztof, E-mail: krzysztof.bolejko@sydney.edu.au

    This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of the Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with the growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ∼ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0), and therefore when analysing low-z cosmological data one should keep Ω_k as a free parameter, independent of the CMB constraints.

  1. Cosmological backreaction within the Szekeres model and emergence of spatial curvature

    NASA Astrophysics Data System (ADS)

    Bolejko, Krzysztof

    2017-06-01

    This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of the Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with the growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ~ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0), and therefore when analysing low-z cosmological data one should keep Ω_k as a free parameter, independent of the CMB constraints.

  2. Studying Turbulence Using Numerical Simulation Databases - X Proceedings of the 2004 Summer Program

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Mansour, Nagi N.

    2004-01-01

    This Proceedings volume contains 32 papers that span a wide range of topics that reflect the ubiquity of turbulence. The papers have been divided into six groups: 1) Solar Simulations; 2) Magnetohydrodynamics (MHD); 3) Large Eddy Simulation (LES) and Numerical Simulations; 4) Reynolds Averaged Navier Stokes (RANS) Modeling and Simulations; 5) Stability and Acoustics; 6) Combustion and Multi-Phase Flow.

  3. Investigating different computed tomography techniques for internal target volume definition.

    PubMed

    Yoganathan, S A; Maria Das, K J; Subramanian, V Siva; Raj, D Gowtham; Agarwal, Arpita; Kumar, Shaleen

    2017-01-01

    The aim of this work was to evaluate various computed tomography (CT) techniques, namely fast CT, slow CT, breath-hold (BH) CT, full-fan cone beam CT (FF-CBCT), half-fan CBCT (HF-CBCT), and average CT, for delineation of the internal target volume (ITV). In addition, these ITVs were compared against four-dimensional CT (4DCT) ITVs. Three-dimensional target motion was simulated using a dynamic thorax phantom with a target insert of 3 cm diameter for ten respiration datasets. CT images were acquired using a commercially available multislice CT scanner, and the CBCT images were acquired using an On-Board Imager. Average CT was generated by averaging the 10 phases of 4DCT. ITVs were delineated for each CT by contouring the volume of the target ball; 4DCT ITVs were generated by merging all 10 phase target volumes. In case of BH-CT, the ITV was derived by a Boolean union of the 0% phase, 50% phase, and fast CT target volumes. ITVs determined by all CT and CBCT scans were significantly smaller (P < 0.05) than the 4DCT ITV, whereas there was no significant difference between average CT and 4DCT ITVs (P = 0.17). Fast CT had the maximum deviation (-46.1% ± 20.9%), followed by slow CT (-34.3% ± 11.0%) and FF-CBCT scans (-26.3% ± 8.7%). However, HF-CBCT scans (-12.9% ± 4.4%) and BH-CT scans (-11.1% ± 8.5%) resulted in almost similar deviations. On the contrary, average CT had the least deviation (-4.7% ± 9.8%). When compared with 4DCT, all the CT techniques underestimated the ITV. In the absence of 4DCT, the HF-CBCT target volumes with an appropriate margin may be a reasonable approach for defining the ITV.
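    The deviations above are percent differences from the 4DCT reference ITV; a one-function sketch of that metric with hypothetical volumes:

```python
def itv_percent_deviation(itv_ct, itv_4dct):
    """Percent deviation of a CT-technique ITV from the 4DCT reference ITV,
    as reported in the study (negative means underestimation)."""
    return 100.0 * (itv_ct - itv_4dct) / itv_4dct

# A fast-CT ITV of 16.2 cc against a 4DCT ITV of 30.0 cc (hypothetical values).
print(round(itv_percent_deviation(16.2, 30.0), 1))  # -> -46.0
```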

  4. [Experience of a Break-Even Point Analysis for Make-or-Buy Decision.].

    PubMed

    Kim, Yunhee

    2006-12-01

    Cost containment through continuous quality improvement of medical service is required in an age of keen competition in the medical market. Laboratory managers should periodically examine make-or-buy decisions, and a break-even point analysis can be a useful analytical tool. In this study, cost accounting and break-even point (BEP) analysis were performed for the case that immunoassay items showing a recent increase in order volume were to be made in-house. Fixed and variable costs were calculated for the case that alpha fetoprotein (AFP), carcinoembryonic antigen (CEA), prostate-specific antigen (PSA), ferritin, free thyroxine (fT4), triiodothyronine (T3), thyroid-stimulating hormone (TSH), CA 125, CA 19-9, and hepatitis B envelope antibody (HBeAb) were to be tested with the Abbott AxSYM instrument. Break-even volume was calculated as fixed cost per year divided by (purchasing cost per test minus variable cost per test), and BEP ratio as total purchasing costs at break-even volume divided by total purchasing costs at actual annual volume. The average fixed cost per year of AFP, CEA, PSA, ferritin, fT4, T3, TSH, CA 125, CA 19-9, and HBeAb was 8,279,187 won, and the average variable cost per test was 3,786 won. Average break-even volume was 1,599 and average BEP ratio was 852%. Average BEP ratio without quality costs such as calibration and quality control was 74%. Because the quality assurance of clinical tests cannot be waived, outsourcing all 10 items was financially more adequate than in-house production at the present volume. BEP analysis was useful as a financial tool for the make-or-buy decision, a matter that laboratory managers commonly face.
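    The break-even arithmetic described above can be sketched directly; the fixed and variable costs below are the study's averages, while the per-test purchase price and the annual volume are assumed purely for illustration:

```python
def break_even_volume(fixed_cost_per_year, purchase_cost_per_test, variable_cost_per_test):
    """Annual test volume at which in-house cost equals outsourcing cost:
    fixed cost / (purchasing cost per test - variable cost per test)."""
    margin = purchase_cost_per_test - variable_cost_per_test
    if margin <= 0:
        raise ValueError("purchase cost must exceed in-house variable cost")
    return fixed_cost_per_year / margin

def bep_ratio_percent(break_even_vol, actual_annual_volume):
    """BEP ratio: purchasing cost at break-even volume over purchasing cost
    at the actual volume; the per-test price cancels, leaving a volume
    ratio. Values above 100% favor outsourcing at the current volume."""
    return 100.0 * break_even_vol / actual_annual_volume

# Study averages: fixed cost 8,279,187 won/yr, variable cost 3,786 won/test.
# The 9,000 won purchase price and 187 tests/yr are assumed for illustration.
bev = break_even_volume(8_279_187, 9_000, 3_786)
print(round(bev))                          # -> 1588
print(round(bep_ratio_percent(bev, 187)))  # -> 849
```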

  5. Solute kinetics with short-daily home hemodialysis using slow dialysate flow rate.

    PubMed

    Kohn, Orly F; Coe, Fredric L; Ing, Todd S

    2010-01-01

    "NxStage System One()" is increasingly used for daily home hemodialysis. The ultrapure dialysate volumes are typically between 15 L and 30 L per dialysis, substantially smaller than the volumes used in conventional dialysis. In this study, the impact of the use of low dialysate volumes on the removal rates of solutes of different molecular weights and volumes of distribution was evaluated. Serum measurements before and after dialysis and total dialysate collection were performed over 30 times in 5 functionally anephric patients undergoing short-daily home hemodialysis (6 d/wk) over the course of 8 to 16 months. Measured solutes included beta(2) microglobulin (beta(2)M), phosphorus, urea nitrogen, and potassium. The average spent dialysate volume (dialysate plus ultrafiltrate) was 25.4+/-4.7 L and the dialysis duration was 175+/-15 min. beta(2) microglobulin clearance of the polyethersulfone dialyzer averaged 53+/-14 mL/min. Total beta(2)M recovered in the dialysate was 106+/-42 mg per treatment (n=38). Predialysis serum beta(2)M levels remained stable over the observation period. Phosphorus removal averaged 694+/-343 mg per treatment with a mean predialysis serum phosphorus of 5.2+/-1.8 mg/dL (n=34). Standard Kt/V averaged 2.5+/-0.3 per week and correlated with the dialysate-based weekly Kt/V. Weekly beta(2)M, phosphorus, and urea nitrogen removal in patients dialyzing 6 d/wk with these relatively low dialysate volumes compared favorably with values published for thrice weekly conventional and with short-daily hemodialysis performed with machines using much higher dialysate flow rates. Results of the present study were achieved, however, with an average of 17.5 hours of dialysis per week.

  6. Clinical Implementation of a Model-Based In Vivo Dose Verification System for Stereotactic Body Radiation Therapy-Volumetric Modulated Arc Therapy Treatments Using the Electronic Portal Imaging Device.

    PubMed

    McCowan, Peter M; Asuni, Ganiyu; Van Uytven, Eric; VanBeek, Timothy; McCurdy, Boyd M C; Loewen, Shaun K; Ahmed, Naseer; Bashir, Bashir; Butler, James B; Chowdhury, Amitava; Dubey, Arbind; Leylek, Ahmet; Nashed, Maged

    2017-04-01

    To report findings from an in vivo dosimetry program implemented for all stereotactic body radiation therapy patients over a 31-month period and discuss the value and challenges of utilizing in vivo electronic portal imaging device (EPID) dosimetry clinically. From December 2013 to July 2016, 117 stereotactic body radiation therapy-volumetric modulated arc therapy patients (100 lung, 15 spine, and 2 liver) underwent 602 EPID-based in vivo dose verification events. A model-based dose reconstruction algorithm calculates the 3-dimensional dose distribution to the patient by back-projecting the primary fluence measured by the EPID during treatment. The EPID frame averaging was optimized in June 2015. For each treatment, a 3%/3-mm γ comparison between our EPID-derived dose and the Eclipse AcurosXB-predicted dose to the planning target volume (PTV) and the ≥20% isodose volume was performed. Alert levels were defined as γ pass rates <85% (lung and liver) and <80% (spine). Investigations were carried out for all fractions exceeding the alert level and were classified as follows: EPID-related, algorithmic, patient setup, anatomic change, or unknown/unidentified errors. The percentages of fractions exceeding the alert levels were 22.6% for lung before frame-average optimization and 8.0% for lung, 20.0% for spine, and 10.0% for liver after frame-average optimization. Overall, mean (± standard deviation) planning target volume γ pass rates were 90.7% ± 9.2%, 87.0% ± 9.3%, and 91.2% ± 3.4% for the lung, spine, and liver patients, respectively. Results from the clinical implementation of our model-based in vivo dose verification method using on-treatment EPID images are reported. The method is demonstrated to be valuable for routine clinical use for verifying delivered dose as well as for detecting errors.
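
    The site-specific alert logic described in the abstract reduces to a threshold check on the γ pass rate. A minimal sketch, using the thresholds the authors report; the function name and example values are illustrative, not part of the published system.

```python
# Sketch of the alert-level logic: gamma pass rates below the site-specific
# threshold flag the fraction for investigation. Thresholds are taken from the
# abstract (85% lung/liver, 80% spine); example pass rates are made up.

ALERT_THRESHOLDS = {"lung": 85.0, "liver": 85.0, "spine": 80.0}

def needs_investigation(site, gamma_pass_rate_percent):
    """True when a fraction's gamma pass rate falls below the site alert level."""
    return gamma_pass_rate_percent < ALERT_THRESHOLDS[site]

print(needs_investigation("lung", 82.4))   # True  -> investigate this fraction
print(needs_investigation("spine", 82.4))  # False -> spine alert level is 80%
```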

  7. Clinical Implementation of a Model-Based In Vivo Dose Verification System for Stereotactic Body Radiation Therapy–Volumetric Modulated Arc Therapy Treatments Using the Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCowan, Peter M., E-mail: pmccowan@cancercare.mb.ca; Asuni, Ganiyu; Van Uytven, Eric

    Purpose: To report findings from an in vivo dosimetry program implemented for all stereotactic body radiation therapy patients over a 31-month period and discuss the value and challenges of utilizing in vivo electronic portal imaging device (EPID) dosimetry clinically. Methods and Materials: From December 2013 to July 2016, 117 stereotactic body radiation therapy–volumetric modulated arc therapy patients (100 lung, 15 spine, and 2 liver) underwent 602 EPID-based in vivo dose verification events. A model-based dose reconstruction algorithm calculates the 3-dimensional dose distribution to the patient by back-projecting the primary fluence measured by the EPID during treatment. The EPID frame averaging was optimized in June 2015. For each treatment, a 3%/3-mm γ comparison between our EPID-derived dose and the Eclipse AcurosXB–predicted dose to the planning target volume (PTV) and the ≥20% isodose volume was performed. Alert levels were defined as γ pass rates <85% (lung and liver) and <80% (spine). Investigations were carried out for all fractions exceeding the alert level and were classified as follows: EPID-related, algorithmic, patient setup, anatomic change, or unknown/unidentified errors. Results: The percentages of fractions exceeding the alert levels were 22.6% for lung before frame-average optimization and 8.0% for lung, 20.0% for spine, and 10.0% for liver after frame-average optimization. Overall, mean (± standard deviation) planning target volume γ pass rates were 90.7% ± 9.2%, 87.0% ± 9.3%, and 91.2% ± 3.4% for the lung, spine, and liver patients, respectively. Conclusions: Results from the clinical implementation of our model-based in vivo dose verification method using on-treatment EPID images are reported. The method is demonstrated to be valuable for routine clinical use for verifying delivered dose as well as for detecting errors.

  8. Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds

    USGS Publications Warehouse

    Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark

    2009-01-01

    Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
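
    The model form described above is a multiple linear regression of sediment yield on watershed and storm predictors. A minimal ordinary-least-squares sketch on synthetic data; the predictor names follow the abstract, but the coefficients and data below are fabricated for illustration and are not the published model.

```python
# Minimal multiple-linear-regression sketch for a sediment-yield-style model,
# fit by ordinary least squares. Data and coefficients are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 40
# predictors (per the abstract): peak 1-h rainfall, burned area, time since
# fire, watershed area, average gradient, relief ratio
X = rng.uniform(0.1, 10.0, size=(n, 6))
true_beta = np.array([2.0, 1.5, -0.5, 0.8, 1.2, 0.3])
y = X @ true_beta + rng.normal(0, 0.1, n)  # response, e.g. log sediment yield

# OLS fit with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef[1:], 2))  # recovers values close to true_beta
```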

  9. SU-E-T-579: Impact of Cylinder Size in High-Dose Rate Brachytherapy (HDRBT) for Primary Cancer in the Vagina

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H; Gopalakrishnan, M; Lee, P

    2014-06-01

    Purpose: To evaluate the dosimetric impact of cylinder size in high-dose-rate brachytherapy for primary vaginal cancers. Methods: Patients treated with HDR vaginal vault radiation using cylinders ranging from 2.5 to 4 cm in diameter at 0.5 cm increments were analyzed. All doses were prescribed at 0.5 cm from the vaginal surface with different treatment lengths. A series of reference points was created to optimize the dose distribution. The fraction dose was 5.5 Gy, and the treatment was repeated four times over two weeks. A cylinder volume was contoured in each case according to the prescribed treatment length and then expanded by 5 mm to obtain a volume, Cylinder-5mm-exp. The volume PTV-Eval was obtained by subtracting the cylinder volume from Cylinder-5mm-exp. This shell volume, PTV-Eval, serves as the target volume for dosimetric evaluation. Results: DVH curves and average doses of PTV-Eval were obtained. Our results indicated that the DVH curves shifted toward higher doses when a larger cylinder was used instead of a smaller one. When a 3.0 cm cylinder was used instead of 2.5 cm, for a 3.0 cm treatment length, the average dose increased only 1%, from 790 to 799 cGy. However, the average doses for the 3.5 and 4 cm cylinders were 932 and 1137 cGy, respectively, at the same treatment length. For a 5.0 cm treatment length, the average dose was 741 cGy for the 2.5 cm cylinder and 859 cGy for the 3 cm cylinder. Conclusion: Our data analysis suggests that for vaginal intracavitary HDRBT, the average dose was at least 35% larger than the prescribed dose in the studied cases; the size of the cylinder impacts the dose delivered to the target volume. A cylinder with a bigger diameter tends to deliver a larger average dose to the PTV-Eval.
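
    The PTV-Eval construction above is a 5-mm radial shell around the cylinder. A geometry-only sketch of that shell volume, treating the expansion as purely radial over the treatment length and ignoring the dome at the cylinder tip; the function and numbers are illustrative, not from the study's contours.

```python
# Geometry sketch of the PTV-Eval shell: the cylinder expanded 5 mm radially
# over the treatment length, minus the cylinder itself. Ignores the tip dome,
# so this is an approximation, not the contoured volumes from the study.
import math

def ptv_eval_volume_cc(diameter_cm, treatment_length_cm, margin_cm=0.5):
    r = diameter_cm / 2.0
    cylinder = math.pi * r**2 * treatment_length_cm
    expanded = math.pi * (r + margin_cm)**2 * treatment_length_cm
    return expanded - cylinder

# The same 5-mm shell is a larger absolute volume around a larger cylinder:
print(round(ptv_eval_volume_cc(2.5, 3.0), 1))  # ~14.1 cc for the 2.5 cm cylinder
print(round(ptv_eval_volume_cc(4.0, 3.0), 1))  # ~21.2 cc for the 4.0 cm cylinder
```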

  10. Incorporation of fragmentation into a volume average solidification model

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Wu, M.; Kharicha, A.; Ludwig, A.

    2018-01-01

    In this study, a volume average solidification model was extended to consider fragmentation as a source of equiaxed crystals during mixed columnar-equiaxed solidification. The formulation suggested for fragmentation is based on two hypotheses: solute-driven remelting is the dominant mechanism, and the transport of solute-enriched melt by interdendritic flow in the columnar growth direction both favors solute-driven remelting and is the necessary condition for fragment transport. A test case with Sn-10 wt%Pb melt solidifying vertically downward in a 2D domain (50 × 60 mm²) was calculated to demonstrate the model's features. Solidification started from the top boundary, and a columnar structure developed initially with its tip growing downward. Thermo-solutal convection then led to fragmentation in the mushy zone near the columnar tip front. The fragments transported out of the columnar region continued to grow and sink, and finally settled and piled up at the bottom of the domain. The growing columnar structure from the top and the pile-up of equiaxed crystals from the bottom finally produced a mixed columnar-equiaxed structure and, in turn, a columnar-to-equiaxed transition (CET). A special macrosegregation pattern was also predicted, in which negative segregation occurred in both the columnar and equiaxed regions and a relatively strong positive segregation occurred in the middle of the domain near the CET line. A parameter study was performed to verify the model capability, and the uncertainties in the model assumptions and parameters were discussed.

  11. Thermal Pollution Math Model. Volume 1. Thermal Pollution Model Package Verification and Transfer. [environment impact of thermal discharges from power plants

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.

    1980-01-01

    Two three-dimensional, time-dependent models, one free-surface and the other rigid-lid, were verified at Anclote Anchorage and Lake Keowee, respectively. The first site is a coastal site in northern Florida; the other is a man-made lake in South Carolina. These models describe the dispersion of heated discharges from power plants under the action of ambient conditions. A one-dimensional, horizontally averaged model was also developed and verified at Lake Keowee. The database consisted of archival in situ measurements and data collected during field missions. The field missions were conducted during winter and summer conditions at each site. Each mission consisted of four infrared scanner flights with supporting ground truth and in situ measurements. At Anclote, special care was taken to characterize the complete tidal cycle. The three-dimensional model results agreed with IR data for thermal plumes to within 1 °C root-mean-square difference on average. The one-dimensional model performed satisfactorily in simulating the 1971-1979 period.

  12. Comparing two-zone models of dust exposure.

    PubMed

    Jones, Rachael M; Simmons, Catherine E; Boelter, Fred W

    2011-09-01

    The selection and application of mathematical models to work tasks are challenging. Previously, we developed and evaluated a semi-empirical two-zone model that predicts time-weighted average (TWA) concentrations (Ctwa) of dust emitted during the sanding of drywall joint compound. Here, we fit the emission rate and random air speed variables of a mechanistic two-zone model to testing event data and apply and evaluate the model using data from two field studies. We found that the fitted random air speed values and emission rate were sensitive to (i) the size of the near field and (ii) the objective function used for fitting, but this did not substantially impact the predicted dust Ctwa. The mechanistic model predictions were lower than the semi-empirical model predictions and the measured respirable dust Ctwa at Site A but were within an acceptable range. At Site B, a 10.5 m3 room, the mechanistic model did not capture the observed difference between personal breathing zone (PBZ) and area Ctwa. The model predicted uniform mixing and dust Ctwa up to an order of magnitude greater than was measured. We suggest that applications of the mechanistic model be limited to contexts where the near-field volume is very small relative to the far-field volume.
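
    The mechanistic two-zone model discussed above is commonly written as two coupled mass balances: a near-field box with emission rate E and inter-zone exchange β, and a far-field box ventilated at flow Q. The sketch below integrates that standard formulation and reports TWA concentrations; the parameter values are illustrative assumptions, not the fitted values from the study.

```python
# Minimal near-field/far-field (two-zone) model sketch: constant emission E in
# the near field, inter-zone exchange beta, ventilation Q through the far
# field. Parameters are illustrative, not the study's fitted values.

def two_zone_twa(E, beta, Q, V_nf, V_ff, t_end, dt=0.1):
    """Forward-Euler integration; returns TWA concentrations (near, far)."""
    steps = int(t_end / dt)
    c_nf = c_ff = 0.0
    sum_nf = sum_ff = 0.0
    for _ in range(steps):
        dc_nf = (E + beta * c_ff - beta * c_nf) / V_nf   # near-field balance
        dc_ff = (beta * c_nf - beta * c_ff - Q * c_ff) / V_ff  # far-field balance
        c_nf += dc_nf * dt
        c_ff += dc_ff * dt
        sum_nf += c_nf
        sum_ff += c_ff
    return sum_nf / steps, sum_ff / steps

# E in mg/min, beta and Q in m^3/min, volumes in m^3, time in min
ctwa_nf, ctwa_ff = two_zone_twa(E=5.0, beta=2.0, Q=1.0, V_nf=0.5, V_ff=10.0, t_end=60)
print(ctwa_nf > ctwa_ff)  # True: near-field TWA exceeds far-field TWA
```

    At steady state this formulation gives C_nf = E/Q + E/β and C_ff = E/Q, which is why a small near-field volume concentrates exposure near the worker.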

  13. A depth-averaged debris-flow model that includes the effects of evolving dilatancy: II. Numerical predictions and experimental tests.

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2014-01-01

    We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.

  14. Method to grow carbon thin films consisting entirely of diamond grains 3-5 nm in size and high-energy grain boundaries

    DOEpatents

    Carlisle, John A.; Auciello, Orlando; Birrell, James

    2006-10-31

    An ultrananocrystalline diamond (UNCD) having an average grain size between 3 and 5 nanometers (nm), with not more than about 8% by volume of diamond having an average grain size larger than 10 nm. A method of manufacturing UNCD film is also disclosed in which a vapor of acetylene and hydrogen in an inert gas other than He, with a volume ratio of acetylene to hydrogen greater than 0.35 and less than 0.85 and the balance being the inert gas, is subjected to a suitable amount of energy to fragment at least some of the acetylene, forming a UNCD film having an average grain size of 3 to 5 nm with not more than about 8% by volume of diamond having an average grain size larger than 10 nm.

  15. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot indicate the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show, first, that the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies; second, that the holding-amount series is highly sensitive to the price series; third, that simple moving average methods are more efficient; and last, that the fuzzy extents of extremely low, high, and very high are the most popular. These results are helpful in investment decisions.
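
    The baseline the paper builds on is the plain moving average crossover: buy when a short average crosses above a long one, sell on the reverse cross. A minimal sketch of that baseline; the fuzzy rating that scales trading volume is omitted, and the price series below is made up for illustration.

```python
# Simple-moving-average crossover sketch: buy when the short SMA crosses above
# the long SMA, sell on the reverse cross. The fuzzy volume rating described in
# the paper is not modeled here; prices are synthetic.

def sma(prices, n):
    """Simple moving average with window n (one value per complete window)."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def crossover_signals(prices, short=3, long=5):
    s, l = sma(prices, short), sma(prices, long)
    s = s[len(s) - len(l):]  # align both series on the same end dates
    signals = []             # indices are into the aligned series
    for i in range(1, len(l)):
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, "buy"))
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, "sell"))
    return signals

prices = [50, 51, 52, 53, 54, 53, 52, 51, 50, 51, 52, 53]
print(crossover_signals(prices))  # -> [(3, 'sell'), (7, 'buy')]
```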

  16. The Neutron Tomography Studies of the Rocks from the Kola Superdeep Borehole

    NASA Astrophysics Data System (ADS)

    Kichanov, S. E.; Kozlenko, D. P.; Ivankina, T. I.; Rutkauskas, A. V.; Lukin, E. V.; Savenko, B. N.

    The volume morphology of a gneiss sample, K-8802, recovered from a depth of 8802 m in the Kola Superdeep Borehole, and of its surface homologue, sample PL-36, has been studied by means of neutron radiography and tomography. The volume and size distributions of biotite-muscovite grains, as well as the grain orientation distribution, have been obtained from the experimental data. It was found that the average volume of the biotite-muscovite grains in the surface homologue sample is noticeably larger than that in the deep-seated gneiss sample K-8802. This drastic difference in grain volumes can be explained by recrystallization processes at depth in the Kola Superdeep Borehole at high temperatures and pressures.

  17. Precipitation-runoff, suspended-sediment, and flood-frequency characteristics for urbanized areas of Elmendorf Air Force Base, Alaska

    USGS Publications Warehouse

    Brabets, Timothy P.

    1999-01-01

    The developed part of Elmendorf Air Force Base near Anchorage, Alaska, consists of two basins with drainage areas of 4.0 and 0.64 square miles, respectively. Runoff and suspended-sediment data were collected from August 1996 to March 1998 to gain a basic understanding of the surface-water hydrology of these areas and to estimate flood-frequency characteristics. Runoff from the larger basin averaged 6 percent of rainfall, whereas runoff from the smaller basin averaged 13 percent of rainfall. During rainfall periods, the suspended-sediment load transported from the larger watershed ranged from 179 to 21,000 pounds and that from the smaller watershed ranged from 23 to 18,200 pounds. On a yield basis, suspended sediment from the larger watershed was 78 pounds per inch of runoff and from the smaller basin was 100 pounds per inch of runoff. Suspended-sediment loads and yields were generally lower during snowmelt periods than during rainfall periods. At each outfall of the two watersheds, water flows into steep natural channels. Suspended-sediment loads measured approximately 1,000 feet downstream from the outfalls during rainfall periods ranged from 8,450 to 530,000 pounds. On a yield basis, suspended sediment averaged 705 pounds per inch of runoff, more than three times as much as the combined sediment yield from the two watersheds. The increase in suspended sediment is most likely due to natural erosion of the streambanks. Streamflow data, collected in 1996 and 1997, were used to calibrate and verify a U.S. Geological Survey computer model, the Distributed Routing Rainfall-Runoff Model, Version II (DR3M-II). The model was then used to simulate annual peak discharges and runoff volumes for 1981 to 1995 using historical rainfall records. Because the model indicated that surcharging (or ponding) would occur, no flood-frequency analysis was done for peak discharges.
A flood-frequency analysis of flood volumes indicated that a 10-year flood would result in 0.39 inch of runoff (averaged over the entire drainage basin) from the larger watershed and 1.1 inches of runoff from the smaller watershed.

  18. Skiff-based Sonar/LiDAR Survey to Calibrate Reservoir Volumes for Watershed Sediment Yield Studies: Carmel River Example

    NASA Astrophysics Data System (ADS)

    Smith, D. P.; Kvitek, R.; Quan, S.; Iampietro, P.; Paddock, E.; Richmond, S. F.; Gomez, K.; Aiello, I. W.; Consulo, P.

    2009-12-01

    Models of watershed sediment yield are complicated by spatial and temporal variability of geologic substrate, land cover, and precipitation parameters. Episodic events such as ENSO cycles and severe wildfire are frequent enough to matter in the long-term average yield, and they can produce short-lived, extreme geomorphic responses. The sediment yield from extreme events is difficult to accurately capture because of the obvious dangers associated with field measurements during flood conditions, but it is critical to include extreme values for developing realistic models of rainfall-sediment yield relations, and for calculating long-term average denudation rates. Dammed rivers provide a time-honored natural laboratory for quantifying average annual sediment yield and extreme-event sediment yield. While lead-line surveys of the past provided crude estimates of reservoir sediment trapping, recent advances in geospatial technology now provide unprecedented opportunities to improve volume change measurements. High-precision digital elevation models surveyed on an annual basis, or before and after specific rainfall-runoff events, can be used to quantify relations between rainfall and sediment yield as a function of landscape parameters, including spatially explicit fire intensity. The Basin-Complex Fire of June and July 2008 resulted in moderate to severe burns in the 114 km^2 portion of the Carmel River watershed above Los Padres Dam. The US Geological Survey produced a debris flow probability/volume model for the region, indicating that the reservoir could lose considerable capacity if intense enough precipitation occurred in the 2009-10 winter. Loss of Los Padres reservoir capacity has implications for endangered steelhead and red-legged frogs and for groundwater and municipal water supply.
    In anticipation of potentially catastrophic erosion, we produced an accurate volume calculation of the Los Padres reservoir in fall 2009 and locally monitored hillslope and fluvial processes during the winter months. The pre-runoff reservoir volume was developed by collecting and merging sonar and LiDAR data from a small research skiff equipped with a high-precision positioning and attitude-correcting system. The terrestrial LiDAR data were augmented with shore-based total station positioning. Watershed monitoring included benchmarked serial stream surveys and semi-quantitative assessment of a variety of near-channel colluvial processes. Rainfall in the 2009-10 water year was not intense enough to trigger widespread debris flows or slope failures in the burned watershed, but dry ravel was apparently accelerated. The geomorphic analysis showed that sediment yield was not significantly higher during this low-rainfall year, despite the widespread presence of very steep, fire-impacted slopes. Because there was little to no increase in sediment yield this year, we have postponed our second reservoir survey. An ENSO event that might bring very intense rains to the watershed is currently predicted for winter 2009-10.
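
    The before/after reservoir surveys described above support volume change by DEM differencing: subtract two co-registered bed-elevation grids and sum over cell area. A minimal sketch on tiny made-up grids; it assumes the DEMs share the same grid and datum, which in practice is what the positioning and attitude-correction work establishes.

```python
# DEM-differencing sketch for reservoir volume change: difference two
# co-registered gridded surfaces and multiply by cell area. Grids are tiny
# synthetic examples, not survey data.
import numpy as np

def volume_change_m3(dem_before, dem_after, cell_size_m):
    """Net deposition (+) or erosion (-) between two co-registered DEMs."""
    return float(np.sum(dem_after - dem_before) * cell_size_m**2)

before = np.zeros((3, 3))
after = np.full((3, 3), 0.5)  # a uniform 0.5 m of deposited sediment
print(volume_change_m3(before, after, cell_size_m=2.0))  # 9 cells * 4 m^2 * 0.5 m = 18.0
```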

  19. Public-Private Ventures in Bachelor Quarters. A Solution to the Loss of Military Construction Projects. Volume 2. Appendices A through E

    DTIC Science & Technology

    1990-06-01

    appropriated travel funds. For a P/PV BEQ, it is not a user input; the model has an internal industry average value. For a MILCON BEQ, this would be...has an internal value, and it is not a user input. Rooms Expense: Laundry and Dry Cleaning Any laundry and dry cleaning services to be provided at no...planned rooms and then divided by 365 days, to arrive at the MILCON cost factor for entry into the model. For a P/PV BQ, the model has an internal value

  20. Role for Lower Extremity Interstitial Fluid Volume Changes in the Development of Orthostasis after Simulated Microgravity

    NASA Technical Reports Server (NTRS)

    Platts, Steven H.; Summers, Richard L.; Martin, David S.; Meck, Janice V.; Coleman, Thomas G.

    2007-01-01

    Reentry orthostasis after exposure to the conditions of spaceflight is a persistent problem among astronauts. In a previous study, a computer model systems analysis was used to examine the physiologic mechanisms involved in this phenomenon. That analysis determined that an augmented capacitance of lower extremity veins, due to a fluid volume contracture of the surrounding interstitial spaces during spaceflight, results in an increase in sequestered blood volume upon standing and appears to be the initiating mechanism responsible for reentry orthostasis. In this study, we attempted to validate the central premise of this hypothesis using a ground-based spaceflight analog. Ten healthy subjects were placed at bed rest in a 6° head-down tilt position for 60 days. The impact of adaptations in interstitial fluid volume and venous capacitance in the lower extremities was then observed during a standard tilt test protocol performed before and after the confinement period. The interstitial thickness superficial to the calcaneus immediately below the lateral malleolus was measured using ultrasound with a 17-5 MHz linear array transducer. Measurements of the changes in anterior tibial vein diameter during tilt were obtained by similar methods. The measurements were taken while the subjects were supine and then during upright tilt (80°) for thirty minutes, or until the subject had signs of presyncope. Additional measurements of the superficial left tibia interstitial thickness and of stroke volume by standard echocardiographic methods were also recorded. In addition, calf compliance was measured over a pressure range of 10-60 mmHg, using plethysmography, in a subset of these subjects (n = 5). There was an average 6% diminution in the size of the lower extremity interstitial space as compared to measurements acquired prior to bed rest.
    This contracture of the interstitial space coincided with a subsequent relative increase in the percentage change in tibial vein diameter and stroke volume upon tilting, in contrast to the observations made before bed rest (54% vs 23%, respectively). Compliance in the calf increased by an average of 36% by day 27 of bed rest. A systems analysis using a computer model of cardiovascular physiology suggests that microgravity-induced interstitial volume depletion results in an accentuation of venous blood volume sequestration and is the initiating event in reentry orthostasis. This hypothesis was tested in volunteer subjects using a ground-based spaceflight analog model that simulated the body fluid redistribution induced by microgravity exposure. Measurements of the changes in the interstitial spaces and the observed responses of the anterior tibial vein with tilt, together with the increase in calf compliance, were consistent with our proposed mechanism for the initiation of the postflight orthostasis often seen in astronauts.

  1. A system for accurate and automated injection of hyperpolarized substrate with minimal dead time and scalable volumes over a large range

    NASA Astrophysics Data System (ADS)

    Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N. J.

    2014-02-01

    Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in injectable volume to between 0.6 and 2.4 ml, and injection was delayed by a required syringe-filling step. An improved MR-compatible injector system has been developed that measures the pH of the injected substrate, uses flow control to reduce dead volume within the injection cannula, and can be operated over a larger volume range. The delay time to injection has been minimized by eliminating the syringe-filling step through the use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. 9 μl over this range). In three repeat injections through a fixed 0.96 mm O.D. tube, the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3-4 s post-injection trigger signal and at 9-12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump.
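
    The repeatability figures quoted above (standard deviation of delivered volume, coefficient of variation of the area under the curve) are simple summary statistics. A minimal sketch of how they are computed; the sample values below are made up for illustration, not the study's measurements.

```python
# Repeatability-metric sketch: standard deviation and coefficient of variation
# (CV) of repeat injections. Sample values are illustrative, not study data.
import statistics

def coefficient_of_variation(values):
    """Sample CV as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

delivered_ul = [98, 104, 97, 95, 101, 92]  # six injections at a 100-ul demand
print(round(statistics.mean(delivered_ul), 1))           # mean delivered volume
print(round(coefficient_of_variation(delivered_ul), 1))  # CV in percent
```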

  2. A predictive parameter estimation approach for the thermodynamically constrained averaging theory applied to diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.

    2017-12-01

    Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.

  3. Phase-field simulations of coherent precipitate morphologies and coarsening kinetics

    NASA Astrophysics Data System (ADS)

    Vaithyanathan, Venugopalan

    2002-09-01

    The primary aim of this research is to enhance the fundamental understanding of coherent precipitation reactions in advanced metallic alloys. The emphasis is on a particular class of precipitation reactions which result in ordered intermetallic precipitates embedded in a disordered matrix. These precipitation reactions underlie the development of high-temperature Ni-base superalloys and ultra-light aluminum alloys. The phase-field approach, which has emerged as the method of choice for modeling microstructure evolution, is employed for this research, with the focus on factors that control the precipitate morphologies and coarsening kinetics, such as precipitate volume fractions and lattice mismatch between precipitates and matrix. Two types of alloy systems are considered. The first involves L1₂-ordered precipitates in a disordered cubic matrix, in an attempt to model the gamma' precipitates in Ni-base superalloys and delta' precipitates in Al-Li alloys. The effect of volume fraction on the coarsening kinetics of gamma' precipitates was investigated using two-dimensional (2D) computer simulations. With increasing volume fraction, larger fractions of precipitates were found to have smaller aspect ratios in the late stages of coarsening, and the precipitate size distributions became wider and more positively skewed. The most interesting result was associated with the effect of volume fraction on the coarsening rate constant. The coarsening rate constant, extracted from the cubic growth law for the average half-edge length, was found to exhibit three distinct regimes as a function of volume fraction: anomalous behavior (a decreasing rate constant with volume fraction) at small volume fractions (≲ 20%), volume-fraction-independent behavior at intermediate volume fractions (~20-50%), and normal behavior (an increasing rate constant with volume fraction) at large volume fractions (≳ 50%).
The second alloy system considered was Al-Cu, with the focus on understanding precipitation of metastable tetragonal theta'-Al₂Cu in a cubic Al solid-solution matrix. In collaboration with Chris Wolverton at Ford Motor Company, a multiscale model was developed that combines first-principles atomistic calculations with a mesoscale phase-field microstructure model. Reliable energetics in the form of bulk free energy, interfacial energy and parameters for calculating the elastic energy were obtained using accurate first-principles calculations. (Abstract shortened by UMI.)
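    The coarsening rate constants discussed above come from fitting the cubic growth law, average size cubed growing linearly in time, to the simulated average half-edge length. A minimal sketch with synthetic noise-free data (the rate constant and initial size are assumptions, not values from the simulations):

```python
import numpy as np

# Fitting the cubic coarsening law <r>^3 = <r0>^3 + K t to extract the
# rate constant K, as done for the simulated average half-edge lengths.
# The data here are synthetic (K_true and r0 are assumptions).
K_true, r0 = 2.0, 1.0
t = np.linspace(0.0, 100.0, 50)
r = (r0**3 + K_true * t) ** (1.0 / 3.0)   # noise-free synthetic sizes

# A linear least-squares fit of r^3 against t recovers K as the slope.
K_fit, intercept = np.polyfit(t, r**3, 1)
print(K_fit, intercept)  # approximately 2.0 and 1.0
```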

  4. 78 FR 75432 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... Proposed Rule Change To Amend Its Price List To Specify the Exclusion of Odd Lot Transactions From Consolidated Average Daily Volume Calculations for a Limited Period of Time for Purposes of Certain Transaction... transactions from consolidated average daily volume (``CADV'') calculations for a limited period of time for...

  5. Planted Pines do not Respond to Bedding on an Acadia-Beauregard-Kolin Silt Loam Site

    Treesearch

    James D. Haywood

    1980-01-01

    Average height and volume of loblolly and slash pines were not affected by site treatment or soil differences 15 years after planting on an Acadia-Beauregard-Kolin silt loam site. Slash pine averaged 2.04 m more in height and yielded 22 percent more volume per hectare than did loblolly pine.

  6. Model of the hydrodynamic loads applied on a rotating halfbridge belonging to a circular settling tank

    NASA Astrophysics Data System (ADS)

    Dascalescu, A. E.; Lazaroiu, G.; Scupi, A. A.; Oanta, E.

    2016-08-01

    The rotating half-bridge of a settling tank is employed to sweep the sludge from the wastewater and to vacuum it and send it to the central collector. It has a complex geometry, but the main beam may be considered a slender bar loaded by the following categories of forces: concentrated forces produced by the weight of the scraping system of blades, suction pipes, and local sludge collecting chamber, plus the sludge in the horizontal sludge-transporting pipes; forces produced by the access bridge; buoyant forces produced by the floating barrels according to Archimedes’ principle; distributed forces produced by the weight of the main bridge; and hydrodynamic forces. In order to evaluate the hydrodynamic loads we have developed a numerical model based on the finite volume method, using the ANSYS-Fluent software. To model the flow we used the Reynolds-Averaged Navier-Stokes (RANS) equations for liquids together with the Volume of Fluid (VOF) model for multiphase flows. Turbulence was modeled with the k-epsilon model, using transport equations for the turbulent kinetic energy k and the dissipation rate epsilon. These results will be used to increase the accuracy of the loads’ sub-model in the theoretical models, i.e., the finite element model and the analytical model.

  7. A compressible Navier-Stokes solver with two-equation and Reynolds stress turbulence closure models

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.

    1992-01-01

    This report outlines the development of a general-purpose aerodynamic solver for compressible turbulent flows. Turbulence closure is achieved using either two-equation or Reynolds stress transport equations. The applicable equation set consists of Favre-averaged conservation equations for mass, momentum and total energy, and transport equations for the turbulent stresses and turbulent dissipation rate. In order to develop a scheme with good shock-capturing capabilities, good accuracy and general geometric capabilities, a multi-block cell-centered finite volume approach is used. Viscous fluxes are discretized using a finite volume representation of a central difference operator, and the source terms are treated as an integral over the control volume. The methodology is validated by testing the algorithm on both two- and three-dimensional flows. Both the two-equation and Reynolds stress models are used on a two-dimensional 10-degree compression ramp at Mach 3, and the two-equation model is used on the three-dimensional flow over a cone at angle of attack at Mach 3.5. With the development of this algorithm, it is now possible to compute complex, compressible high-speed flow fields using both two-equation and Reynolds stress turbulence closure models, with the capability of eventually evaluating their predictive performance.

  8. Rational design of an on-site volume reduction system for source-separated urine.

    PubMed

    Pahore, Muhammad Masoom; Ito, Ryusei; Funamizu, Naoyuki

    2010-04-01

    Human urine contains nitrogen, phosphorus and potassium, which can be applied as fertilizer in agriculture, replacing commercial fertilizer. However, owing to the low nutrient content of the urine, huge quantities must be transported to farmland to meet the nutrient demand of crops. This greatly increases the transportation cost for the farmers. To address the transportation issue, a new on-site volume reduction system was tested at the laboratory scale, based on water evaporation from vertical gauze sheets. A mathematical water transport model was proposed to evaluate the performance of the system. The mass transfer coefficient and the resistance to water flow through the sheet in the water transport model were obtained from the experiments. The results agreed with the simulated data, thereby confirming the proposed model. The model was then applied to the dry climate of southern Pakistan, having an air temperature of 30-40 degrees C and air humidity of 20-40%, for an 80% volume reduction of 10 L urine per day, which corresponds to a family of 10 members (average for a household in Pakistan). The findings revealed that the estimated size of the vertical sheet is 440-2060 cm², a sufficiently small area for setting up the system at the household level.
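    The sizing logic can be illustrated with a toy mass balance: required sheet area equals the daily evaporation demand divided by the areal evaporation flux. The flux law and the mass-transfer coefficient below are rough assumptions, not the authors' fitted values:

```python
# Toy sizing in the spirit of the evaporation model: required sheet area =
# daily evaporation demand / areal evaporation flux. The flux law and the
# mass-transfer coefficient k are rough assumptions, not the fitted values.
k = 0.03                      # m/s, assumed mass-transfer coefficient
c_sat, c_air = 0.040, 0.012   # kg/m^3 vapour: ~saturation at 35 C vs dry air
flux = k * (c_sat - c_air)    # kg of water evaporated per m^2 per second

demand_kg_day = 0.8 * 10.0    # 80% volume reduction of 10 L (~10 kg) per day
area_m2 = (demand_kg_day / 86400.0) / flux
print(f"required sheet area ~ {area_m2 * 1e4:.0f} cm^2")
```

    With these assumed numbers the area comes out near 1100 cm², inside the 440-2060 cm² range reported in the abstract.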

  9. A two-phase debris-flow model that includes coupled evolution of volume fractions, granular dilatancy, and pore-fluid pressure

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2011-01-01

    Pore-fluid pressure plays a crucial role in debris flows because it counteracts normal stresses at grain contacts and thereby reduces intergranular friction. Pore-pressure feedback accompanying debris deformation is particularly important during the onset of debris-flow motion, when it can dramatically influence the balance of forces governing downslope acceleration. We consider further effects of this feedback by formulating a new, depth-averaged mathematical model that simulates coupled evolution of granular dilatancy, solid and fluid volume fractions, pore-fluid pressure, and flow depth and velocity during all stages of debris-flow motion. To illustrate implications of the model, we use a finite-volume method to compute one-dimensional motion of a debris flow descending a rigid, uniformly inclined slope, and we compare model predictions with data obtained in large-scale experiments at the USGS debris-flow flume. Predictions for the first 1 s of motion show that increasing pore pressures (due to debris contraction) cause liquefaction that enhances flow acceleration. As acceleration continues, however, debris dilation causes dissipation of pore pressures, and this dissipation helps stabilize debris-flow motion. Our numerical predictions of this process match experimental data reasonably well, but predictions might be improved by accounting for the effects of grain-size segregation.

  10. A comparison of estimates of basin-scale soil-moisture evapotranspiration and estimates of riparian groundwater evapotranspiration with implications for water budgets in the Verde Valley, Central Arizona, USA

    USGS Publications Warehouse

    Tillman, Fred; Wiele, Stephen M.; Pool, Donald R.

    2015-01-01

    Population growth in the Verde Valley in Arizona has led to efforts to better understand water availability in the watershed. Evapotranspiration (ET) is a substantial component of the water budget and a critical factor in estimating groundwater recharge in the area. In this study, four estimates of ET are compared and discussed with applications to the Verde Valley. Higher potential ET (PET) rates from the soil-water balance (SWB) recharge model resulted in an average annual ET volume about 17% greater than for ET from the basin characteristics model (BCM) recharge model. Annual BCM PET volume, however, was greater by about a factor of 2 or more than SWB actual ET (AET) estimates, which are used in the SWB model to estimate groundwater recharge. ET also was estimated using a method that combines MODIS-EVI remote sensing data and geospatial information, and by the MODFLOW-EVT ET package as part of a regional groundwater-flow model that includes the study area. Annual ET volumes were about the same for upper-bound MODIS-EVI ET for perennial streams as for the MODFLOW ET estimates, with the small differences between the two methods having minimal impact on annual or longer groundwater budgets for the study area.

  11. Spectroscopic Detection of COClF in the Tropical and Mid-Latitude Lower Stratosphere

    NASA Technical Reports Server (NTRS)

    Rinsland, Curtis P.; Nassar, Ray; Boone, Chris D.; Bernath, Peter; Chiou, Linda; Weisenstein, Debra K.; Mahieu, Emmanuel; Zander, Rodolphe

    2007-01-01

    We report retrievals of COClF (carbonyl chlorofluoride) based on Atmospheric Chemistry Experiment (ACE) solar occultation spectra recorded at tropical and mid-latitudes during 2004-2005. The COClF molecule is a temporary reservoir of both chlorine and fluorine and has not previously been measured by remote sensing. A maximum COClF mixing ratio of 99.7+/-48.0 pptv (10(exp -12) per unit volume, 1 sigma) is measured at 28 km for tropical and subtropical occultations (latitudes below 20° in both hemispheres), with lower mixing ratios at both higher and lower altitudes. Northern hemisphere mid-latitude mixing ratios (30-50°N) resulted in an average profile with a peak mixing ratio of 51.7+/-32.1 pptv, 1 sigma, at 27 km, also decreasing above and below that altitude. We compare the measured average profiles with the one reported set of in situ lower-stratospheric mid-latitude measurements from 1986 and 1987, a previous two-dimensional (2-D) model calculation for 1987 and 1993, and a 2-D model prediction for 2004. The measured average tropical profile is in close agreement with the model prediction; the northern mid-latitude profile is also consistent, although the peak in the measured profile occurs at a higher altitude (2.5-4.5 km offset) than in the model prediction. Seasonal-average 2-D model predictions of the COClF stratospheric distribution for 2004 are also reported.

  12. Rapid recipe formulation for plasma etching of new materials

    NASA Astrophysics Data System (ADS)

    Chopra, Meghali; Zhang, Zizhuo; Ekerdt, John; Bonnecaze, Roger T.

    2016-03-01

    A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared with, and shown to be superior to, factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
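    A single-component Metropolis-Hastings sampler of the kind mentioned can be sketched in a few lines; the Langmuir-style rate model, flat prior, and synthetic data below are illustrative assumptions, not the MgO etch chemistry of the paper:

```python
import math
import random

# Minimal single-component Metropolis-Hastings sketch: infer one kinetic
# parameter of a toy etch-rate model from noisy data. The saturation-type
# rate law, prior, and data are illustrative assumptions.
random.seed(1)

def etch_rate(k, flux):                  # toy Langmuir-type saturation model
    return k * flux / (1.0 + flux)

flux_data = [0.5, 1.0, 2.0, 4.0]
k_true, sigma = 3.0, 0.05
obs = [etch_rate(k_true, f) + random.gauss(0, sigma) for f in flux_data]

def log_post(k):                         # flat prior on k > 0, Gaussian likelihood
    if k <= 0:
        return -math.inf
    return -sum((o - etch_rate(k, f)) ** 2
                for o, f in zip(obs, flux_data)) / (2 * sigma ** 2)

k, samples = 1.0, []
for _ in range(20000):
    prop = k + random.gauss(0, 0.2)      # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop
    samples.append(k)

burn = samples[5000:]                    # discard burn-in
print(sum(burn) / len(burn))             # posterior mean, near k_true
```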

  13. Flow and axial dispersion in a sinusoidal-walled tube: Effects of inertial and unsteady flows

    NASA Astrophysics Data System (ADS)

    Richmond, Marshall C.; Perkins, William A.; Scheibe, Timothy D.; Lambert, Adam; Wood, Brian D.

    2013-12-01

    In this work, we consider a sinusoidal-walled tube (a three-dimensional tube with sinusoidally-varying diameter) as a simplified conceptualization of flow in porous media. Direct numerical simulation using computational fluid dynamics (CFD) methods was used to compute velocity fields by solving the Navier-Stokes equations, and also to numerically solve the volume averaging closure problem, for a range of Reynolds numbers (Re) spanning the low-Re to inertial flow regimes, including one simulation at Re=449 for which unsteady flow was observed. The longitudinal dispersion observed for the flow was computed using a random walk particle tracking method, and this was compared to the longitudinal dispersion predicted from a volume-averaged macroscopic mass balance using the method of volume averaging; the results of the two methods were consistent. Our results are compared to experimental measurements of dispersion in porous media and to previous theoretical results for both the low-Re, Stokes flow regime and for values of Re representing the steady inertial regime. In the steady inertial regime, a power-law increase in the effective longitudinal dispersion (DL) with Re was found, and this is consistent with previous results. This rapid rate of increase is caused by trapping of solute in expansions due to flow separation (eddies). One unsteady (but non-turbulent) flow case (Re=449) was also examined. For this case, the rate of increase of DL with Re was smaller than that observed at lower Re. Velocity fluctuations in this regime lead to increased rates of solute mass transfer between the core flow and separated flow regions, thus diminishing the amount of tailing caused by solute trapping in eddies and thereby reducing longitudinal dispersion. The observed tailing was further explored through analysis of concentration skewness (third moment) and its asymptotic convergence to conventional advection-dispersion behavior (skewness = 0). 
The method of volume averaging was applied to develop a skewness model, which demonstrated that the skewness decreases as the inverse square root of time. Our particle tracking simulation results were shown to conform to this theoretical result in most of the cases considered.
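    The random walk particle tracking estimate of longitudinal dispersion can be sketched for the trivial case of a uniform velocity field (no eddies), where the recovered dispersion coefficient simply equals the input diffusivity; all parameters are assumptions for illustration:

```python
import random
import statistics

# Random-walk particle tracking: advect particles in a uniform flow with
# Gaussian diffusive steps, then estimate longitudinal dispersion from the
# growth of position variance, D_L = var(x) / (2 t). Parameters assumed.
random.seed(0)
D, u, dt, steps, n = 1e-3, 1.0, 0.01, 1000, 2000
step_sd = (2.0 * D * dt) ** 0.5

x = [0.0] * n
for _ in range(steps):
    x = [xi + u * dt + random.gauss(0.0, step_sd) for xi in x]

t = steps * dt
D_est = statistics.pvariance(x) / (2.0 * t)
print(D_est)  # close to the input D for this eddy-free flow
```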

  14. Automated treatment planning for a dedicated multi-source intra-cranial radiosurgery treatment unit accounting for overlapping structures and dose homogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.

    2013-09-15

    Purpose: The purpose of this work is to advance the two-step approach for Gamma Knife® Perfexion™ (PFX) optimization to account for dose homogeneity and overlap between the planning target volume (PTV) and organs-at-risk (OARs). Methods: In the first step, a geometry-based algorithm is used to quickly select isocentre locations while explicitly accounting for PTV-OAR overlaps. In this approach, the PTV is divided into subvolumes based on the PTV-OAR overlaps and the distance of voxels to the overlaps. Only a few isocentres are selected in the overlap volume, and a larger number of isocentres are carefully selected among voxels that are immediately close to the overlap volume. In the second step, a convex optimization is solved to find the optimal combination of collimator sizes and their radiation durations for each isocentre location. Results: This two-step approach is tested on seven clinical cases (comprising 11 targets) for which the authors assess coverage, OAR dose, and homogeneity index, and relate these parameters to the overlap fraction for each case. In terms of coverage, the mean V99 for the gross target volume (GTV) was 99.8%, while the V95 for the PTV averaged 94.6%, thus satisfying the clinical objectives of 99% for the GTV and 95% for the PTV, respectively. The mean relative dose to the brainstem was 87.7% of the prescription dose (with a maximum of 108%), while on average 11.3% of the PTV overlapped with the brainstem. The mean beam-on time per fraction per dose was 8.6 min with a calibration dose rate of 3.5 Gy/min, and the computational time averaged 205 min. Compared with previous work involving single-fraction radiosurgery, the resulting plans were more homogeneous, with an average homogeneity index of 1.18 compared to 1.47. Conclusions: PFX treatment plans with a homogeneous dose distribution can be achieved by inverse planning using geometric isocentre selection and mathematical modeling and optimization techniques. 
The quality of the obtained treatment plans is clinically satisfactory, while the homogeneity index is improved compared to conventional PFX plans.

  15. Truck Traffic Iowa : 2010

    DOT National Transportation Integrated Search

    2011-01-01

    Truck volumes represented on this map are annual average daily traffic volumes between major traffic generators, i.e., highway junctions and cities. Truck volumes include 6-tire and 3-axle single-unit trucks, buses, and all multiple-unit trucks.

  16. Prediction of forced expiratory volume in pulmonary function test using radial basis neural networks and k-means clustering.

    PubMed

    Manoharan, Sujatha C; Ramakrishnan, Swaminathan

    2009-10-01

    In this work, prediction of forced expiratory volume in the pulmonary function test, carried out using spirometry and neural networks, is presented. The pulmonary function data were recorded from volunteers using a commercially available flow-volume spirometer under a standard acquisition protocol. Radial basis function neural networks were used to predict forced expiratory volume in 1 s (FEV1) from the recorded flow-volume curves. The optimal centres of the hidden layer of the radial basis function network were determined by the k-means clustering algorithm. The performance of the neural network model was evaluated by computing prediction error statistics (average value, standard deviation, root mean square) and their correlation with the true data for normal, restrictive and obstructive cases. Results show that the adopted neural networks are capable of predicting FEV1 in both normal and abnormal cases. Prediction accuracy was higher for obstructive abnormality than for restrictive cases. It appears that this method of assessment is useful in diagnosing pulmonary abnormalities with incomplete data and data with poor recording.
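    The two-stage construction, k-means for the hidden-layer centres followed by a linear least-squares fit of the output weights, can be sketched on a one-dimensional toy regression problem (not spirometry data; the cluster count and kernel width are assumptions):

```python
import numpy as np

# RBF network for regression: k-means chooses the hidden-layer centres,
# then the output weights are a linear least-squares fit. 1-D toy data,
# not spirometry curves; cluster count and kernel width are assumptions.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * X)

k, width = 5, 0.2
centres = rng.choice(X, k, replace=False)
for _ in range(20):                      # plain Lloyd iterations (1-D k-means)
    labels = np.argmin(np.abs(X[:, None] - centres[None, :]), axis=1)
    centres = np.array([X[labels == j].mean() if np.any(labels == j)
                        else centres[j] for j in range(k)])

# Gaussian RBF design matrix and least-squares output weights.
Phi = np.exp(-((X[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

rmse = float(np.sqrt(np.mean((Phi @ w - y) ** 2)))
print(rmse)  # small training residual
```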

  17. Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks.

    PubMed

    Längkvist, Martin; Jendeberg, Johan; Thunberg, Per; Loutfi, Amy; Lidén, Mats

    2018-06-01

    Computed tomography (CT) is the method of choice for diagnosing ureteral stones - kidney stones that obstruct the ureter. The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity between stones and non-stone structures, and in efficiently dealing with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high-resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract, with labeling of ureteral stones performed by a radiologist. The best model, using 2.5D input data and anatomical information, achieved a sensitivity of 100% and an average of 2.68 false positives per patient on a test set of 88 scans. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Spectral (Finite) Volume Method for Conservation Laws on Unstructured Grids II: Extension to Two Dimensional Scalar Equation

    NASA Technical Reports Server (NTRS)

    Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The framework for constructing a high-order, conservative Spectral (Finite) Volume (SV) method is presented for two-dimensional scalar hyperbolic conservation laws on unstructured triangular grids. Each triangular grid cell forms a spectral volume (SV), and the SV is further subdivided into polygonal control volumes (CVs) to support high-order data reconstruction. Cell-averaged solutions from these CVs are used to reconstruct a high-order polynomial approximation in the SV. Each CV is then updated independently with a Godunov-type finite volume method and a high-order Runge-Kutta time integration scheme. A universal reconstruction is obtained by partitioning all SVs in a geometrically similar manner. The convergence of the SV method is shown to depend on how an SV is partitioned. A criterion based on the Lebesgue constant has been developed and used successfully to determine the quality of various partitions. Symmetric, stable, and convergent linear, quadratic, and cubic SVs have been obtained, and many different types of partitions have been evaluated. The SV method is tested on both linear and non-linear model problems with and without discontinuities.
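    The Godunov-type control-volume update at the heart of such schemes reduces, for linear advection with first-order (piecewise-constant) reconstruction, to the classical upwind scheme; a minimal sketch that also checks conservation of the cell-averaged solution:

```python
import numpy as np

# First-order Godunov (upwind) finite-volume update for linear advection
# u_t + a u_x = 0 on a periodic grid. The SV method replaces the
# piecewise-constant reconstruction with high-order polynomials per SV,
# but the conservative CV update has this same flux-difference form.
nx, a, cfl = 100, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a

u = np.zeros(nx)
u[nx // 4 : nx // 2] = 1.0             # square pulse initial condition

for _ in range(100):
    flux = a * u                        # upwind flux for a > 0
    u -= dt / dx * (flux - np.roll(flux, 1))

print(u.sum() * dx)  # total mass stays at 0.25 (conserved up to rounding)
```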

  19. Circulating blood volume determination using electronic spin resonance spectroscopy.

    PubMed

    Facorro, Graciela; Bianchin, Ana; Boccio, José; Hager, Alfredo

    2006-09-01

    There have been numerous methods proposed to measure the circulating blood volume (CBV). Nevertheless, none of them has been massively and routinely accepted in clinical diagnosis. This study describes a simple and rapid method, in a rabbit model, using the dilution of autologous red cells labeled with a nitroxide radical (Iodoacetamide-TEMPO), which can be detected by electronic spin resonance (ESR) spectroscopy. Blood samples were withdrawn and re-injected using the marginal ear veins. The average CBV measured by the new method per body weight (CBV(IAT)/BW) was 59 +/- 7 mL/kg (n = 33). Simultaneously, blood volume determinations using the nitroxide radical and (51)Cr (CBV(Cr)) were performed. In the plot of the difference between the methods (CBV(IAT) - CBV(Cr)) against their average (CBV(IAT) + CBV(Cr))/2, the mean bias was -1.1 +/- 6.9 mL and the limits of agreement (mean difference +/- 2 SD) were -14.9 and 12.7 mL. Lin's concordance correlation coefficient was ρc = 0.988. Thus, both methods are in close agreement. The new method allows a correct estimation of the CBV without using radioactivity, avoiding blood manipulation, and decreasing the possibility of blood contamination, with accuracy and precision similar to those of the "gold standard" method.
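    The Bland-Altman agreement statistics reported (mean bias and limits of agreement) are straightforward to compute; a minimal sketch with hypothetical paired CBV measurements, not the study's data:

```python
import statistics

# Bland-Altman agreement statistics as used in the abstract: mean bias and
# limits of agreement (bias +/- 2 S.D. of the paired differences).
# The paired CBV values below are hypothetical, not the study's data.
cbv_iat = [168, 175, 160, 182, 171]   # mL, method under test
cbv_cr  = [170, 172, 165, 180, 174]   # mL, 51Cr reference

diffs = [a - b for a, b in zip(cbv_iat, cbv_cr)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 2 * sd, bias + 2 * sd)  # limits of agreement
print(bias, loa)
```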

  20. Photoactivated Composite Biomaterial for Soft Tissue Restoration in Rodents and in Humans

    PubMed Central

    Nahas, Zayna; Reid, Branden; Coburn, Jeannine M.; Axelman, Joyce; Chae, Jemin J.; Guo, Qiongyu; Trow, Robert; Thomas, Andrew; Hou, Zhipeng; Lichtsteiner, Serge; Sutton, Damon; Matheson, Christine; Walker, Patricia; David, Nathaniel; Mori, Susumu; Taube, Janis M.; Elisseeff, Jennifer H.

    2015-01-01

    Soft tissue reconstruction often requires multiple surgical procedures that can result in scars and disfiguration. Facial soft tissue reconstruction represents a clinical challenge because even subtle deformities can severely affect an individual’s social and psychological function. We therefore developed a biosynthetic soft tissue replacement composed of poly(ethylene glycol) (PEG) and hyaluronic acid (HA) that can be injected and photocrosslinked in situ with transdermal light exposure. Modulating the ratio of synthetic to biological polymer allowed us to tune implant elasticity and volume persistence. In a small-animal model, implanted photocrosslinked PEG-HA showed a dose-dependent relationship between increasing PEG concentration and enhanced implant volume persistence. In direct comparison with commercial HA injections, the PEG-HA implants maintained significantly greater average volumes and heights. Reversibility of the implant volume was achieved with hyaluronidase injection. Pilot clinical testing in human patients confirmed the feasibility of the transdermal photocrosslinking approach for implantation in abdominal soft tissue, although an inflammatory response was observed surrounding some of the materials. PMID:21795587

  1. Registration of 3D fetal neurosonography and MRI☆

    PubMed Central

    Kuklisova-Murgasova, Maria; Cifor, Amalia; Napolitano, Raffaele; Papageorghiou, Aris; Quaghebeur, Gerardine; Rutherford, Mary A.; Hajnal, Joseph V.; Noble, J. Alison; Schnabel, Julia A.

    2013-01-01

    We propose a method for registration of 3D fetal brain ultrasound with a reconstructed magnetic resonance fetal brain volume. This method, for the first time, allows the alignment of models of the fetal brain built from magnetic resonance images with 3D fetal brain ultrasound, opening possibilities to develop new, prior-information-based image analysis methods for 3D fetal neurosonography. The reconstructed magnetic resonance volume is first segmented using a probabilistic atlas, and a pseudo ultrasound image volume is simulated from the segmentation. This pseudo ultrasound image is then affinely aligned with clinical ultrasound fetal brain volumes using a robust block-matching approach that can deal with intensity artefacts and missing features in the ultrasound images. A qualitative and quantitative evaluation demonstrates good performance of the method for our application, in comparison with other tested approaches. The intensity average of 27 ultrasound images co-aligned with the pseudo ultrasound template shows good correlation with the anatomy of the fetal brain as seen in the reconstructed magnetic resonance image. PMID:23969169

  2. Preferred numbers and the distributions of trade sizes and trading volumes in the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Mu, G.-H.; Chen, W.; Kertész, J.; Zhou, W.-X.

    2009-03-01

    The distributions of trade sizes and trading volumes are investigated based on the limit order book data of 22 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. We observe that the size distribution of trades for individual stocks exhibits jumps, which is caused by the number preference of traders when placing orders. We analyze the applicability of the “q-Gamma” function for fitting the distribution by the Cramér-von Mises criterion. The empirical PDFs of trading volumes at different timescales Δt ranging from 1 min to 240 min can be well modeled. The applicability of the q-Gamma functions for multiple trades is restricted to transaction numbers Δn ≤ 8. We find that all the PDFs have power-law tails for large volumes. Using careful estimation of the average tail exponents α of the distributions of trade sizes and trading volumes, we get α > 2, well outside the Lévy regime.
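    A standard way to obtain such tail exponents is the Hill estimator computed over the k largest observations; a sketch on synthetic Pareto data (the estimator choice and parameters are assumptions, not necessarily the authors' exact method):

```python
import math
import random

# Hill estimator for the power-law tail exponent alpha, the standard tool
# behind estimates like alpha > 2. Data are synthetic Pareto draws, not
# the Shenzhen order-book volumes; k is an assumed tail cutoff.
random.seed(0)
alpha_true = 2.5
data = [random.paretovariate(alpha_true) for _ in range(50000)]

k = 1000                               # number of upper order statistics
tail = sorted(data)[-k:]
threshold = tail[0]
hill = k / sum(math.log(x / threshold) for x in tail)
print(hill)  # close to alpha_true
```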

  3. Prediction of future asset prices

    NASA Astrophysics Data System (ADS)

    Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei

    2014-12-01

    This paper attempts to incorporate trading volumes as an additional predictor for predicting asset prices. Denoting r(t) as the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price to be dependent on the present and l-1 past values r(t), r(t-1), ..., r(t-l+1) via a conditional distribution which is derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found by using the composite indices of the Malaysia stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume from the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it might be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When the probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) change in price tends to be negative (or positive). Thus, the above probability has good potential to be used as a market indicator in technical analysis.
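    The interval construction can be illustrated with a simplified stand-in: fit a distribution to historical one-step returns and take its α/2 and 1-α/2 points. A Gaussian replaces the paper's conditional power-normal distribution here, and the return history is synthetic:

```python
import random
import statistics

# Simplified prediction interval for the next price: model one-step returns
# with a fitted normal and take its alpha/2 and 1 - alpha/2 points. The
# paper's conditional power-normal distribution is replaced by a Gaussian
# here, and the return history is synthetic.
random.seed(0)
returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]

mu = statistics.mean(returns)
sd = statistics.stdev(returns)
z = 1.96                               # ~ the 97.5% normal point (alpha = 0.05)

price_t = 1500.0                       # current index level (illustrative)
lo = price_t * (1.0 + mu - z * sd)
hi = price_t * (1.0 + mu + z * sd)
print(lo, hi)                          # 95% prediction interval for time t+1
```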

  4. Community violence exposure in early adolescence: Longitudinal associations with hippocampal and amygdala volume and resting state connectivity.

    PubMed

    Saxbe, Darby; Khoddam, Hannah; Piero, Larissa Del; Stoycos, Sarah A; Gimbel, Sarah I; Margolin, Gayla; Kaplan, Jonas T

    2018-06-11

    Community violence exposure is a common stressor, known to compromise youth cognitive and emotional development. In a diverse, urban sample of 22 adolescents, participants reported on community violence exposure (witnessing a beating or illegal drug use, hearing gun shots, or other forms of community violence) in early adolescence (average age 12.99), and underwent a neuroimaging scan 3-5 years later (average age 16.92). Community violence exposure in early adolescence predicted smaller manually traced left and right hippocampal and amygdala volumes in a model controlling for age, gender, and concurrent community violence exposure, measured in late adolescence. Community violence continued to predict hippocampus (but not amygdala) volumes after we also controlled for family aggression exposure in early adolescence. Community violence exposure was also associated with stronger resting state connectivity between the right hippocampus (using the manually traced structure as a seed region) and bilateral frontotemporal regions including the superior temporal gyrus and insula. These resting state connectivity results held after controlling for concurrent community violence exposure, SES, and family aggression. Although this is the first study focusing on community violence in conjunction with brain structure and function, these results dovetail with other research linking childhood adversity with smaller subcortical volumes in adolescence and adulthood, and with altered frontolimbic resting state connectivity. Our findings suggest that even community-level exposure to neighborhood violence can have detectable neural correlates in adolescents. © 2018 John Wiley & Sons Ltd.

  5. Computer model of Raritan River Basin water-supply system in central New Jersey

    USGS Publications Warehouse

    Dunne, Paul; Tasker, Gary D.

    1996-01-01

    This report describes a computer model of the Raritan River Basin water-supply system in central New Jersey. The computer model provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system during extended periods of below-average precipitation. The computer model is a continuity-accounting model consisting of a series of interconnected nodes. At each node, the inflow volume, outflow volume, and change in storage are determined and recorded for each month. The model runs with a given set of operating rules and water-use requirements including releases, pumpages, and diversions. The model can be used to assess the hypothetical performance of the Raritan River Basin water-supply system in past years under alternative sets of operating rules. It also can be used to forecast the likelihood of specified outcomes, such as the depletion of reservoir contents below a specified threshold or of streamflows below statutory minimum passing flows, for a period of up to 12 months. The model was constructed on the basis of current reservoir capacities and the natural, unregulated monthly runoff values recorded at U.S. Geological Survey streamflow-gaging stations in the basin.
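    The per-node continuity accounting described above (inflow, outflow, and change in storage tracked monthly) can be sketched in a few lines. This is an illustrative sketch only; the variable names, units, and the capacity/spill handling are assumptions, not the USGS model's actual rules.

```python
# Minimal sketch of one node's monthly continuity accounting.
# All names, units, and the spill rule are hypothetical illustrations.

def step_node(storage, inflow, pumpage, release, capacity):
    """Advance one node by one month: change in storage = inflow - outflow."""
    outflow = pumpage + release
    new_storage = storage + inflow - outflow
    spill = max(0.0, new_storage - capacity)   # excess passes downstream
    new_storage = min(max(new_storage, 0.0), capacity)
    return new_storage, spill

storage = 80.0                                  # million gallons (hypothetical)
storage, spill = step_node(storage, inflow=25.0, pumpage=15.0,
                           release=5.0, capacity=100.0)
```

    In the real model a chain of such nodes would be evaluated each month, with each node's outflow and spill feeding the next node's inflow.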

  6. Planned Subtotal Resection of Vestibular Schwannoma Differs from the Ideal Radiosurgical Target Defined by Adaptive Hybrid Surgery.

    PubMed

    Sheppard, John P; Lagman, Carlito; Prashant, Giyarpuram N; Alkhalid, Yasmine; Nguyen, Thien; Duong, Courtney; Udawatta, Methma; Gaonkar, Bilwaj; Tenn, Stephen E; Bloch, Orin; Yang, Isaac

    2018-06-01

    To retrospectively compare ideal radiosurgical target volumes defined by a manual method (surgeon) to those determined by Adaptive Hybrid Surgery (AHS) operative planning software in 7 patients with vestibular schwannoma (VS). Four attending surgeons (3 neurosurgeons and 1 ear, nose, and throat surgeon) manually contoured planned residual tumor volumes for 7 consecutive patients with VS. Next, the AHS software determined the ideal radiosurgical target volumes based on a specified radiotherapy plan. Our primary measure was the difference between the average planned residual tumor volumes and the ideal radiosurgical target volumes defined by AHS (dRV_AHS-planned). We included 7 consecutive patients with VS in this study. The planned residual tumor volumes were smaller than the ideal radiosurgical target volumes defined by AHS (1.6 vs. 4.5 cm³, P = 0.004). On average, the actual post-operative residual tumor volumes were smaller than the ideal radiosurgical target volumes defined by AHS (2.2 cm³ vs. 4.5 cm³; P = 0.02). The average difference between the ideal radiosurgical target volume defined by AHS and the planned residual tumor volume (dRV_AHS-planned) was 2.9 ± 1.7 cm³, and we observed a trend toward larger dRV_AHS-planned in patients who lost serviceable facial nerve function compared with patients who maintained serviceable facial nerve function (4.7 cm³ vs. 1.9 cm³; P = 0.06). Planned subtotal resection of VS diverges from the ideal radiosurgical target defined by AHS, but whether that influences clinical outcomes is unclear. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. SU-F-BRD-01: A Logistic Regression Model to Predict Objective Function Weights in Prostate Cancer IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, J; Chan, T; Lee, T

    2014-06-15

    Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7 cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows the treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.
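    The core idea of mapping a geometric feature (OVR) to a weight in (0, 1) can be sketched with a logistic curve. Everything below is illustrative: the data are synthetic, and the simple squared-error gradient descent stands in for the authors' actual fitting procedure.

```python
import numpy as np

# Hedged sketch: fit a logistic curve mapping an overlap volume ratio (OVR)
# feature to a rectum objective-function weight in (0, 1). Synthetic data;
# the paper's features, coefficients, and training method differ.

rng = np.random.default_rng(0)
ovr = rng.uniform(0.0, 1.0, 200)                      # synthetic OVR at 0.7 cm
w_true = 1.0 / (1.0 + np.exp(-(4.0 * ovr - 2.0)))     # hypothetical ground truth
w = np.clip(w_true + rng.normal(0.0, 0.02, ovr.size), 1e-3, 1.0 - 1e-3)

def predict(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

a = b = 0.0
for _ in range(5000):                                  # squared-error descent
    p = predict(ovr, a, b)
    g = (p - w) * p * (1.0 - p)                        # chain rule through sigmoid
    a -= 0.5 * float(np.mean(g * ovr))
    b -= 0.5 * float(np.mean(g))

rectum_weight = predict(0.6, a, b)                     # weight for a new patient
bladder_weight = 1.0 - rectum_weight                   # the two weights sum to one
```

    The logistic output is convenient here because the two normalized weights are complementary, so predicting one determines the other.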

  8. Improved biovolume estimation of Microcystis aeruginosa colonies: A statistical approach.

    PubMed

    Alcántara, I; Piccini, C; Segura, A M; Deus, S; González, C; Martínez de la Escalera, G; Kruk, C

    2018-05-27

    The Microcystis aeruginosa complex (MAC) clusters many of the most common freshwater and brackish bloom-forming cyanobacteria. In monitoring protocols, biovolume estimation is a common approach to determine MAC colony biomass and is useful for prediction purposes. Biovolume (μm³ mL⁻¹) is calculated by multiplying organism abundance (org L⁻¹) by colonial volume (μm³ org⁻¹). Colonial volume is estimated based on geometric shapes and requires accurate measurements of dimensions using optical microscopy. This poses a trade-off between easy-to-measure but low-accuracy simple shapes (e.g. sphere) and time-costly but high-accuracy complex shapes (e.g. ellipsoid). Overestimation effects on ecological studies and management decisions associated with harmful blooms are significant due to the large sizes of MAC colonies. In this work, we aimed to increase the precision of MAC biovolume estimations by developing a statistical model based on two easy-to-measure dimensions. We analyzed field data from a wide environmental gradient (800 km) spanning freshwater to estuarine and seawater. We measured length, width, and depth from ca. 5700 colonies under an inverted microscope and estimated colonial volume using three different recommended geometrical shapes (sphere, prolate spheroid, and ellipsoid). Because of the non-spherical shape of MAC, the ellipsoid gave the most accurate approximation, whereas the sphere overestimated colonial volume (3-80 times), especially for large colonies (maximum linear dimension, MLD, higher than 300 μm). The ellipsoid requires measuring three dimensions and is time-consuming. Therefore, we constructed different statistical models to predict organism depth based on length and width. Splitting the data into training (2/3) and test (1/3) sets, all models yielded low average errors in training (1.41-1.44%) and testing (1.3-2.0%). The models were also evaluated using three other independent datasets. The multiple linear model was finally selected to calculate MAC volume as an ellipsoid based on length and width. This work contributes to a better estimation of MAC volume applicable to monitoring programs as well as to ecological research. Copyright © 2017. Published by Elsevier B.V.
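    The approach reduces to estimating the hidden third dimension from the two easy ones and then applying the ellipsoid formula. A minimal sketch, with made-up regression coefficients (the published model's coefficients differ):

```python
import math

# Hedged sketch: ellipsoid colony volume from length (L) and width (W),
# with depth (D) predicted by a linear model. Coefficients b0, b1, b2 are
# invented for illustration only.

def colony_volume(length_um, width_um, b0=5.0, b1=0.3, b2=0.5):
    depth_um = b0 + b1 * length_um + b2 * width_um   # hypothetical linear fit
    # Ellipsoid volume with axes given as full dimensions: V = (pi/6) L W D
    return math.pi / 6.0 * length_um * width_um * depth_um

def sphere_volume(length_um):
    # Sphere on the longest dimension, which overestimates flat colonies
    return math.pi / 6.0 * length_um ** 3

v_ell = colony_volume(100.0, 60.0)   # µm³ per organism
v_sph = sphere_volume(100.0)
```

    For this flattened example colony the sphere estimate is more than double the ellipsoid estimate, illustrating the overestimation the authors report.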

  9. Weighted south-wide average pulpwood prices

    Treesearch

    James E. Granskog; Kevin D. Growther

    1991-01-01

    Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
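    The production-weighted average described above is a standard weighted mean. A small sketch with invented prices ($/cord) and production volumes (cords):

```python
# Hedged sketch of production-weighted vs. unweighted average prices.
# The state prices and production figures below are illustrative only.

prices = {"AL": 20.0, "GA": 24.0, "VA": 16.0}          # $/cord
production = {"AL": 5.0e6, "GA": 8.0e6, "VA": 1.0e6}   # cords

unweighted = sum(prices.values()) / len(prices)
weighted = (sum(prices[s] * production[s] for s in prices)
            / sum(production.values()))
```

    Here the high-production, high-price states pull the weighted average above the unweighted one, which is exactly the distortion the unweighted South-wide figure hides.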

  10. Some comments on thermodynamic consistency for equilibrium mixture equations of state

    DOE PAGES

    Grove, John W.

    2018-03-28

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  11. Numerical Simulation of Two Dimensional Flows in Yazidang Reservoir

    NASA Astrophysics Data System (ADS)

    Huang, Lingxiao; Liu, Libo; Sun, Xuehong; Zheng, Lanxiang; Jing, Hefang; Zhang, Xuande; Li, Chunguang

    2018-01-01

    This paper studied the problem of water flow in the Yazidang Reservoir. A 2-D RNG turbulence model was built, the boundary conditions were specified, the governing equations were discretized with the finite volume method, and the grid was generated by the advancing-front method. Two flow-field conditions of the reservoir were simulated, and the simulated and measured average vertical velocities were compared near the water inlet and the water intake. The results showed that the mathematical model could be applied to similar industrial water reservoirs.

  12. Coupled discrete element and finite volume solution of two classical soil mechanics problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Feng; Drumm, Eric; Guiochon, Georges A

    One dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results compared with the analytical solutions. The two phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase is modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
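    The Ergun relationship cited above is the classic packed-bed pressure-drop correlation, combining a viscous (Blake-Kozeny) and an inertial (Burke-Plummer) term. The sketch below shows its standard form; the drag-coupling expression actually used in the YADE/OpenFOAM framework may differ in detail, and the example values are illustrative.

```python
def ergun_pressure_gradient(u, eps, d_p, mu, rho):
    """Ergun (1952) pressure gradient [Pa/m] through a packed bed:
    u   superficial velocity [m/s],  eps  porosity [-],
    d_p particle diameter [m], mu viscosity [Pa s], rho density [kg/m^3]."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * (1.0 - eps) * rho * u * abs(u) / (eps ** 3 * d_p)
    return viscous + inertial

# Water through a 1 mm sand bed at 1 mm/s superficial velocity (illustrative)
dp_dx = ergun_pressure_gradient(u=1e-3, eps=0.4, d_p=1e-3,
                                mu=1e-3, rho=1000.0)
```

    At this low velocity the viscous term dominates, which is typical of seepage problems like the quick-condition test.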

  13. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the 2-D or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating-direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 3 is the Programmer's Reference, and describes the program structure, the FORTRAN variables stored in common blocks, and the details of each subprogram.

  14. Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.

    2015-03-01

    A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.

  15. Controls on Arctic sea ice from first-year and multi-year ice survival rates

    NASA Astrophysics Data System (ADS)

    Armour, K.; Bitz, C. M.; Hunke, E. C.; Thompson, L.

    2009-12-01

    The recent decrease in Arctic sea ice cover has transpired with a significant loss of multi-year (MY) ice. The transition to an Arctic that is populated by thinner first-year (FY) sea ice has important implications for future trends in area and volume. We develop a reduced model for Arctic sea ice with which we investigate how the survivability of FY and MY ice control various aspects of the sea-ice system. We demonstrate that Arctic sea-ice area and volume behave approximately as first-order autoregressive processes, which allows for a simple interpretation of September sea-ice in which its mean state, variability, and sensitivity to climate forcing can be described naturally in terms of the average survival rates of FY and MY ice. This model, used in concert with a sea-ice simulation that traces FY and MY ice areas to estimate the survival rates, reveals that small trends in the ice survival rates explain the decline in total Arctic ice area, and the relatively larger loss of MY ice area, over the period 1979-2006. Additionally, our model allows for a calculation of the persistence time scales of September area and volume anomalies. A relatively short memory time scale for ice area (~ 1 year) implies that Arctic ice area is nearly in equilibrium with long-term climate forcing at all times, and therefore observed trends in area are a clear indication of a changing climate. A longer memory time scale for ice volume (~ 5 years) suggests that volume can be out of equilibrium with climate forcing for long periods of time, and therefore trends in ice volume are difficult to distinguish from its natural variability. With our reduced model, we demonstrate the connection between memory time scale and sensitivity to climate forcing, and discuss the implications that a changing memory time scale has on the trajectory of ice area and volume in a warming climate. 
Our findings indicate that it is unlikely that a “tipping point” in September ice area and volume will be reached as the climate is further warmed. Finally, we suggest novel model validation techniques based upon comparing the characteristics of FY and MY ice within models to observations. We propose that keeping an account of FY and MY ice area within sea ice models offers a powerful new way to evaluate model projections of sea ice in a greenhouse warming climate.
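    The persistence argument above rests on the AR(1) picture: an anomaly decays by a factor a each year, so the memory time scale is τ = -1/ln(a). A minimal sketch, with survival-rate-like coefficients invented to reproduce the ~1-year and ~5-year time scales quoted in the abstract:

```python
import math

# Hedged sketch: AR(1) anomaly model x[t+1] = a * x[t] + noise,
# with memory (e-folding) time scale tau = -1 / ln(a).
# The coefficients below are illustrative, not fitted values.

def memory_timescale(a):
    """Years for an anomaly to decay by a factor of e."""
    return -1.0 / math.log(a)

tau_area = memory_timescale(0.37)    # a ~ e^-1  -> ~1-year memory (area)
tau_volume = memory_timescale(0.82)  # a ~ e^-0.2 -> ~5-year memory (volume)
```

    The short area time scale is why area trends reflect forcing almost immediately, while the longer volume time scale lets volume drift out of equilibrium for years.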

  16. Is the Surface Potential Integral of a Dipole in a Volume Conductor Always Zero? A Cloud Over the Average Reference of EEG and ERP.

    PubMed

    Yao, Dezhong

    2017-03-01

    Currently, the average reference is one of the most widely adopted references in EEG and ERP studies. Its theoretical assumption is that the integral of the potential over the surface of a volume conductor is zero, so that the average of scalp potential recordings may approximate the theoretically desired zero reference. However, this zero-integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples show that the potential integral over the surface of a volume conductor containing a dipole may not be zero; it depends on the shape of the conductor and the orientation of the dipole. On one hand, this means that the average reference is not a theoretical 'gold standard' reference; on the other hand, it reminds us that the practical accuracy of the average reference is determined not only by the well-known electrode array density and coverage but also, intrinsically, by the head shape. Reference selection thus remains a fundamental problem to be addressed in EEG and ERP studies.
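    Operationally, the average reference just subtracts the mean of all channels from every channel at each time point. The sketch below shows that step on synthetic data; as the paper argues, forcing the channel mean to zero does not guarantee that the true surface integral is zero.

```python
import numpy as np

# Hedged sketch of average re-referencing (synthetic data).

def average_reference(eeg):
    """eeg: (n_channels, n_samples) array of potentials."""
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(1)
eeg = rng.normal(size=(32, 1000))          # 32 channels, 1000 samples
rereferenced = average_reference(eeg)
# By construction the channel mean of the result is zero at every sample,
# even though the underlying surface integral of the potential may not be.
```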

  17. Distribution and size of mucous glands in the ferret tracheobronchial tree.

    PubMed

    Hajighasemi-Ossareh, Mohammad; Borthwell, Rachel M; Lachowicz-Scroggins, Marrah; Stevens, Jeremy E; Finkbeiner, Walter E; Widdicombe, Jonathan H

    2013-11-01

    A transgenic ferret model of cystic fibrosis has recently been generated. It is probable that malfunction of airway mucous glands contributes significantly to the airway pathology of this disease. The usefulness of the ferret model may therefore depend in part on how closely the airway glands of ferrets resemble those of humans. Here, we show that in the ferret trachea glands are commonest in its most ventral aspect and disappear about halfway up the lateral walls; they are virtually absent from the dorsal membranous portion. Further, the aggregate volume of glands per unit mucosal surface declines progressively by about 60% between the larynx and the carina. The average frequency of gland openings for the ferret trachea as a whole is only about one-fifth that in humans (where gland openings are found at approximately the same frequency throughout the trachea). Glands in the ferret trachea are on average about one-third the size of those in the human. Therefore, the aggregate volume of tracheal glands (per unit mucosal surface area) in the ferret is only about 6% that in humans. As in other mammalian species, airway glands in the ferret disappear at an airway internal diameter of ∼1 mm, corresponding approximately in this species to airway generation 6. Copyright © 2013 Wiley Periodicals, Inc.

  18. 76 FR 76799 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-08

    ... average daily volume (``CADV''). The text of the proposed rule change is available at the Exchange, the... Proposed Rule Change To Amend NYSE Rule 104(a)(1)(A) To Reflect That Designated Market Maker Unit Quoting Requirements Are Based on Consolidated Average Daily Volume December 2, 2011. Pursuant to Section 19(b)(1) of...

  19. HYDRODYNAMIC SIMULATION OF THE UPPER POTOMAC ESTUARY.

    USGS Publications Warehouse

    Schaffranck, Raymond W.

    1986-01-01

    Hydrodynamics of the upper extent of the Potomac Estuary between Indian Head and Morgantown, Md., are simulated using a two-dimensional model. The model computes water-surface elevations and depth-averaged velocities by numerically integrating finite-difference forms of the equations of mass and momentum conservation using the alternating direction implicit method. The fundamental, non-linear, unsteady-flow equations, upon which the model is formulated, include additional terms to account for Coriolis acceleration and meteorological influences. Preliminary model/prototype data comparisons show agreement to within 9% for tidal flow volumes and phase differences within the measured-data-recording interval. Use of the model to investigate the hydrodynamics and certain aspects of transport within this Potomac Estuary reach is demonstrated.

  20. Quantitative Radiology: Automated CT Liver Volumetry Compared With Interactive Volumetry and Manual Volumetry

    PubMed Central

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Garg, Shailesh; Hori, Masatoshi; Oto, Aytekin; Baron, Richard L.

    2014-01-01

    OBJECTIVE The purpose of this study was to evaluate automated CT volumetry in the assessment of living-donor livers for transplant and to compare this technique with software-aided interactive volumetry and manual volumetry. MATERIALS AND METHODS Hepatic CT scans of 18 consecutively registered prospective liver donors were obtained under a liver transplant protocol. Automated liver volumetry was developed on the basis of 3D active-contour segmentation. To establish reference standard liver volumes, a radiologist manually traced the contour of the liver on each CT slice. We compared the results obtained with automated and interactive volumetry with those obtained with the reference standard for this study, manual volumetry. RESULTS The average interactive liver volume was 1553 ± 343 cm³, and the average automated liver volume was 1520 ± 378 cm³. The average manual volume was 1486 ± 343 cm³. Both interactive and automated volumetric results had excellent agreement with manual volumetric results (intraclass correlation coefficients, 0.96 and 0.94). The average user time for automated volumetry was 0.57 ± 0.06 min/case, whereas those for interactive and manual volumetry were 27.3 ± 4.6 and 39.4 ± 5.5 min/case, the difference being statistically significant (p < 0.05). CONCLUSION Both interactive and automated volumetry are accurate for measuring liver volume with CT, but automated volumetry is substantially more efficient. PMID:21940543

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, D; Aryal, M; Samuels, S

    Purpose: A previous study showed that large sub-volumes of tumor with low blood volume (BV) (poorly perfused) in head-and-neck (HN) cancers are significantly associated with local-regional failure (LRF) after chemoradiation therapy, and could be targeted with intensified radiation doses. This study aimed to develop an automated and scalable model to extract voxel-wise contrast-enhanced temporal features of dynamic contrast-enhanced (DCE) MRI in HN cancers for predicting LRF. Methods: Our model development consists of training and testing stages. The training stage includes preprocessing of individual-voxel DCE curves from tumors for intensity normalization and temporal alignment, temporal feature extraction from the curves, feature selection, and training classifiers. For feature extraction, multiresolution Haar discrete wavelet transformation is applied to each DCE curve to capture temporal contrast-enhanced features. The wavelet coefficients are selected as feature vectors. Support vector machine classifiers are trained to classify tumor voxels as having either low or high BV, for which a BV threshold of 7.6% was previously established and used as ground truth. The model is tested on a new dataset. The voxel-wise DCE curves for training and testing were from 14 and 8 patients, respectively. A posterior probability map of the low-BV class was created to examine the tumor sub-volume classification. Voxel-wise classification accuracy was computed to evaluate performance of the model. Results: Average classification accuracies were 87.2% for training (10-fold cross-validation) and 82.5% for testing. The lowest and highest accuracies (patient-wise) were 68.7% and 96.4%, respectively. Posterior probability maps of the low-BV class showed that the sub-volumes extracted by our model were similar to ones defined by the BV maps, with most misclassifications occurring near the sub-volume boundaries. Conclusion: This model could be valuable to support adaptive clinical trials with further validation. The framework could be extendable and scalable to extract temporal contrast-enhanced features of DCE-MRI in other tumors. We would like to acknowledge NIH for funding support: UO1 CA183848.
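    The feature-extraction step above can be illustrated with one level of a Haar discrete wavelet transform applied to a voxel's enhancement curve. The toy curve and its normalization are synthetic stand-ins for the study's preprocessing; a multiresolution version would recurse on the approximation coefficients.

```python
import numpy as np

# Hedged sketch: one level of a Haar DWT on a (synthetic) DCE curve,
# producing approximation and detail coefficients as temporal features.

def haar_dwt_level(x):
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)   # low-pass: local averages
    detail = (even - odd) / np.sqrt(2.0)   # high-pass: local differences
    return approx, detail

t = np.linspace(0.0, 1.0, 64)
curve = 1.0 - np.exp(-5.0 * t)               # toy contrast-enhancement curve
approx, detail = haar_dwt_level(curve)
features = np.concatenate([approx, detail])  # feature vector for a classifier
```

    Because the Haar transform is orthonormal, the features preserve the curve's energy while separating slow enhancement trends from fast fluctuations.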

  3. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption.

    PubMed

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Mardor, Yael; Miklavcic, Damijan

    2016-03-01

    Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of MRI scans of rat brains after electroporation treatment. Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r² = 0.79, p < 0.008; r² = 0.91, p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete-cell-death and no-cell-death thresholds was relatively narrow (0.88-0.91) even for small numbers of pulses and depended weakly on the number of pulses. For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup.
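    The Peleg-Fermi form commonly used in electroporation modelling expresses survival as a sigmoid in field strength, with a critical field Ec and transition width A that shrink as the pulse number grows. The functional form below follows that general shape; the parameter values and their exponential pulse-number dependence are illustrative assumptions, not this paper's fitted values.

```python
import math

# Hedged sketch of a Peleg-Fermi survival curve:
#   S(E, N) = 1 / (1 + exp((E - Ec(N)) / A(N)))
# with Ec and A decreasing with pulse number N (illustrative parameters).

def peleg_fermi_survival(E, n_pulses, Ec0=1000.0, A0=150.0, k1=0.03, k2=0.06):
    Ec = Ec0 * math.exp(-k1 * n_pulses)   # V/cm, critical field falls with N
    A = A0 * math.exp(-k2 * n_pulses)     # transition width narrows with N
    return 1.0 / (1.0 + math.exp((E - Ec) / A))

s_low = peleg_fermi_survival(E=400.0, n_pulses=10)    # below critical field
s_high = peleg_fermi_survival(E=900.0, n_pulses=10)   # above critical field
```

    Fields above Ec drive survival toward zero; combining such a curve with a numerical electric field map yields the voxel-wise cell-death predictions described in the abstract.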

  4. Association of Socioeconomic and Geographic Factors With Google Trends for Tanning and Sunscreen.

    PubMed

    Seth, Divya; Gittleman, Haley; Barnholtz-Sloan, Jill; Bordeaux, Jeremy S

    2018-02-01

    Internet search trends are used to track both infectious diseases and noncommunicable conditions. The authors sought to characterize Google Trends search volume index (SVI) for the terms "sunscreen" and tanning ("tanning salon" and "tanning bed") in the United States from 2010 to 2015 and analyze association with educational attainment, average income, and percent white data by state. SVI is search frequency data relative to total search volume. Analysis of variance, univariate, and multivariate analyses were performed to assess seasonal variations in SVI and the association of state-level SVI with state latitudes and census data. Hawaii had the highest SVI for sunscreen searches, whereas Alaska had the lowest. West Virginia had the highest SVI for tanning searches, whereas Hawaii had the lowest. There were significant differences between seasonal SVI for sunscreen and tanning searches (p < .001). Sunscreen SVI by state was correlated with an increase in educational attainment and average income, and a decrease in latitude (p < .05) in a multivariate model. Tanning SVI was correlated with a decrease in educational attainment and average income, and an increase in latitude (p < .05). Internet search trends for sunscreen and tanning are influenced by socioeconomic factors, and could be a tool for skin-related public health.

  5. Combined Recipe for Clinical Target Volume and Planning Target Volume Margins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroom, Joep, E-mail: joep.stroom@fundacaochampalimaud.pt; Gilhuijs, Kenneth; Vieira, Sandra

    2014-03-01

    Purpose: To develop a combined recipe for clinical target volume (CTV) and planning target volume (PTV) margins. Methods and Materials: A widely accepted PTV margin recipe is M{sub geo} = aΣ{sub geo} + bσ{sub geo}, with Σ{sub geo} and σ{sub geo} standard deviations (SDs) representing systematic and random geometric uncertainties, respectively. On the basis of histopathology data of breast and lung tumors, we suggest describing the distribution of microscopic islets around the gross tumor volume (GTV) by a half-Gaussian with SD Σ{sub micro}, yielding as possible CTV margin recipe: M{sub micro} = ƒ(N{sub i}) × Σ{sub micro}, with N{sub i}more » the average number of microscopic islets per patient. To determine ƒ(N{sub i}), a computer model was developed that simulated radiation therapy of a spherical GTV with isotropic distribution of microscopic disease in a large group of virtual patients. The minimal margin that yielded D{sub min} <95% in maximally 10% of patients was calculated for various Σ{sub micro} and N{sub i}. Because Σ{sub micro} is independent of Σ{sub geo}, we propose they should be added quadratically, yielding for a combined GTV-to-PTV margin recipe: M{sub GTV-PTV} = √([aΣ{sub geo}]{sup 2} + [ƒ(N{sub i})Σ{sub micro}]{sup 2}) + bσ{sub geo}. This was validated by the computer model through numerous simultaneous simulations of microscopic and geometric uncertainties. Results: The margin factor ƒ(N{sub i}) in a relevant range of Σ{sub micro} and N{sub i} can be given by: ƒ(N{sub i}) = 1.4 + 0.8log(N{sub i}). Filling in the other factors found in our simulations (a = 2.1 and b = 0.8) yields for the combined recipe: M{sub GTV-PTV} = √((2.1Σ{sub geo}){sup 2} + ([1.4 + 0.8log(N{sub i})] × Σ{sub micro}){sup 2}) + 0.8σ{sub geo}. The average margin difference between the simultaneous simulations and the above recipe was 0.2 ± 0.8 mm (1 SD). Calculating M{sub geo} and M{sub micro} separately and adding them linearly overestimated PTVs by on average 5 mm. 
Margin recipes based on tumor control probability (TCP) instead of D{sub min} criteria yielded similar results. Conclusions: A general recipe for GTV-to-PTV margins is proposed, which shows that CTV and PTV margins should be added in quadrature instead of linearly.
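The combined recipe stated in the Results can be implemented directly. A minimal sketch in Python, assuming the logarithm in ƒ(N{sub i}) is base-10 and that all SDs and margins share one unit (mm); neither the log base nor units are stated explicitly in this record:

```python
import math

def gtv_to_ptv_margin(sigma_geo, sd_geo_random, sigma_micro, n_islets,
                      a=2.1, b=0.8):
    """Combined GTV-to-PTV margin per the proposed recipe.

    sigma_geo     -- systematic geometric SD (Sigma_geo)
    sd_geo_random -- random geometric SD (sigma_geo)
    sigma_micro   -- SD of the half-Gaussian islet distribution (Sigma_micro)
    n_islets      -- average number of microscopic islets per patient (N_i)
    """
    f_ni = 1.4 + 0.8 * math.log10(n_islets)  # margin factor f(N_i); base-10 log assumed
    m_geo = a * sigma_geo                     # systematic geometric term
    m_micro = f_ni * sigma_micro              # microscopic-disease term
    # independent systematic terms add in quadrature; the random term adds linearly
    return math.sqrt(m_geo**2 + m_micro**2) + b * sd_geo_random

margin = gtv_to_ptv_margin(2.0, 1.5, 3.0, 10)  # ≈ 9.02 for these example inputs
```

Adding the two systematic terms linearly instead of in quadrature would give 4.2 + 6.6 + 1.2 = 12.0 for the same inputs, illustrating the roughly 3 mm overestimate the abstract warns about.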

  6. On the construction of a time base and the elimination of averaging errors in proxy records

    NASA Astrophysics Data System (ADS)

    Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

    2009-04-01

    Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems: Problem 1: Natural archives are sampled equidistantly on a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it is averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is a reasonable assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed.
The measured, averaged proxy signal is modeled by the following signal model: y(n,θ) = (1/Δ) ∫ from n−Δ/2 to n+Δ/2 of y(m,θ) dm, where m is the position, x(m) = Δm, and θ are the unknown parameters. The proxy signal y(m,θ) that we want to identify (the proxy signal as found in the natural archive) is modeled as a harmonic series: y(m,θ) = A0 + Σ (k=1..H) [Ak sin(kωt(m)) + A(k+H) cos(kωt(m))], with the time base t(m) = mTS + g(m)TS. Here TS = 1/fS is the sampling period, fS the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a splines approximation of the TBD is chosen: g(m) = Σl bl φl(m), where b is the vector of unknown time base distortion parameters and {φl} is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least-squares algorithm. The vessel density measured in the mangrove tree R. mucronata, a proxy for rainfall in tropical regions, was used to illustrate the method. The proxy data on the newly constructed time base showed a yearly periodicity, as expected, and the correction for the averaging effect increased the amplitude by 11.18%.
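The amplitude loss caused by volume averaging can be illustrated numerically. A sketch in Python/NumPy of a hypothetical annual-cycle proxy averaged over an assumed drill width Δ, showing that the moving average attenuates the amplitude by the factor sin(ωΔ/2)/(ωΔ/2); the cycle length and drill width below are illustrative, not values from the study:

```python
import numpy as np

# Hypothetical annual-cycle proxy: y(m) = A*sin(omega*m), one cycle per
# 12 distance units, sampled by a drill of finite width delta.
A, omega = 1.0, 2 * np.pi / 12.0
delta = 3.0                               # assumed sample (drill) width

m = np.linspace(0.0, 120.0, 12001)        # dense distance grid, spacing 0.01
y = A * np.sin(omega * m)

# A moving average over the drill width approximates the integral in the
# signal model and attenuates the amplitude of the continuous signal.
win = int(round(delta / (m[1] - m[0])))
y_avg = np.convolve(y, np.ones(win) / win, mode="same")

measured_amp = y_avg[win:-win].max()      # ignore convolution edge effects
# theory: attenuation sin(x)/x with x = omega*delta/2; np.sinc(t)=sin(pi t)/(pi t)
predicted_amp = A * np.sinc(omega * delta / (2 * np.pi))
```

For these numbers the averaged amplitude is about 0.90 of the true amplitude, i.e. an underestimation of roughly 10%, comparable in size to the 11.18% correction reported above.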

  7. Breath stacking in children with neuromuscular disorders.

    PubMed

    Jenkins, H M; Stocki, A; Kriellaars, D; Pasterkamp, H

    2014-06-01

    Respiratory muscle weakness in neuromuscular disorders (NMD) can lead to shallow breathing and respiratory insufficiency over time. Children with NMD often cannot perform maneuvers to recruit lung volume. In adults, breath stacking with a mask and one-way valve can achieve significantly increased lung volumes. To evaluate involuntary breath stacking (IBS) in NMD, we studied 23 children of whom 15 were cognitively aware and able to communicate verbally. For IBS, a one-way valve and pneumotachograph were attached to a face mask. Tidal volumes (Vt) and minute ventilation (VE) were calculated from airflow over 30 sec before and after 15 sec of expiratory valve closure. Six cooperative male subjects with Duchenne muscular dystrophy (DMD) participated in a subsequent comparison of IBS with voluntary breath stacking (VBS) and supported breath stacking (SBS). The average Vt in those studied with IBS was 277 ml (range 29-598 ml). The average increase in volume by stacking was 599 ml (range -140 to 2,916 ml) above Vt. The average number of stacked breaths was 4.5 (range 0-17). VE increased on average by 18% after stacking (P < 0.05, paired t-test). Oxygen saturation did not change after stacking. Four of the 23 children did not breath stack. Compared to IBS, VBS achieved similar volumes in the six subjects with DMD, but SBS was more successful in those with greatest muscle weakness. IBS may achieve breath volumes of approximately three times Vt and may be particularly useful in non-cooperative subjects with milder degrees of respiratory muscle weakness. © 2013 Wiley Periodicals, Inc.

  8. Ablation dynamics - from absorption to heat accumulation/ultra-fast laser matter interaction

    NASA Astrophysics Data System (ADS)

    Kramer, Thorsten; Remund, Stefan; Jäggi, Beat; Schmid, Marc; Neuenschwander, Beat

    2018-05-01

    Ultra-short pulsed laser radiation is used in manifold industrial applications today. Although state-of-the-art laser sources provide an average power of 10-100 W at repetition rates of up to several megahertz, most applications do not benefit from it. On the one hand, the processing speed is limited to some hundred millimeters per second by the dynamics of mechanical axes or galvanometric scanners. On the other hand, high repetition rates require consideration of new physical effects such as heat accumulation and shielding that might reduce the process efficiency. For ablation processes, process efficiency can be expressed by the specific removal rate, i.e., the ablated volume per unit time and average power. The analysis of the specific removal rate for different laser parameters, such as average power, repetition rate or pulse duration, and process parameters, such as scanning speed or material, can be used to find the best operating point for microprocessing applications. Analytical models and molecular dynamics simulations based on the so-called two-temperature model reveal the causes of these limiting physical effects. The findings of models and simulations can be used to take advantage of them and to optimize processing strategies.
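The specific removal rate as defined here — ablated volume per unit time and average power — makes comparing operating points a one-line calculation. A sketch in Python; the trial labels, volumes, and times are purely illustrative, not measured values from this work:

```python
def specific_removal_rate(ablated_volume_mm3, time_min, avg_power_w):
    """Ablated volume per unit time and average power, in mm^3/(min*W)."""
    return ablated_volume_mm3 / (time_min * avg_power_w)

# Compare hypothetical operating points: raising average power does not
# automatically win once heat accumulation and shielding cut efficiency.
trials = {
    "10 W, low rep rate": specific_removal_rate(0.30, 2.0, 10.0),
    "20 W, mid rep rate": specific_removal_rate(0.50, 2.0, 20.0),
    "40 W, high rep rate": specific_removal_rate(0.70, 2.0, 40.0),
}
best = max(trials, key=trials.get)  # highest removal rate per watt wins
```

For these invented numbers the lowest-power setting has the highest specific removal rate, the kind of non-obvious optimum the abstract's analysis is meant to locate.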

  9. Using Mobile Device Samples to Estimate Traffic Volumes

    DOT National Transportation Integrated Search

    2017-12-01

    In this project, TTI worked with StreetLight Data to evaluate a beta version of its traffic volume estimates derived from global positioning system (GPS)-based mobile devices. TTI evaluated the accuracy of average annual daily traffic (AADT) volume :...

  10. Variation in neoadjuvant chemotherapy utilization for epithelial ovarian cancer at high volume hospitals in the United States and associated survival.

    PubMed

    Barber, Emma L; Dusetzina, Stacie B; Stitzenberg, Karyn B; Rossi, Emma C; Gehrig, Paola A; Boggess, John F; Garrett, Joanne M

    2017-06-01

    To estimate variation in the use of neoadjuvant chemotherapy by high volume hospitals and to determine the association between hospital utilization of neoadjuvant chemotherapy and survival. We identified incident cases of stage IIIC or IV epithelial ovarian cancer in the National Cancer Database from 2006 to 2012. Inclusion criteria were treatment at a high volume hospital (>20 cases/year) and treatment with both chemotherapy and surgery. A logistic regression model was used to predict receipt of neoadjuvant chemotherapy based on case-mix predictors (age, comorbidities, stage etc). Hospitals were categorized by the observed-to-expected ratio for neoadjuvant chemotherapy use as low, average, or high utilization hospitals. Survival analysis was performed. We identified 11,574 patients treated at 55 high volume hospitals. Neoadjuvant chemotherapy was used for 21.6% (n=2494) of patients and use varied widely by hospital, from 5%-55%. High utilization hospitals (n=1910, 10 hospitals) had a median neoadjuvant chemotherapy rate of 39% (range 23-55%), while low utilization hospitals (n=2671, 14 hospitals) had a median rate of 10% (range 5-17%). For all ovarian cancer patients adjusting for clinical and socio-demographic factors, treatment at a hospital with average or high neoadjuvant chemotherapy utilization was associated with a decreased rate of death compared to treatment at a low utilization hospital (HR 0.90 95% CI 0.83-0.97 and HR 0.85 95% CI 0.75-0.95). Wide variation exists in the utilization of neoadjuvant chemotherapy to treat stage IIIC and IV epithelial ovarian cancer even among high volume hospitals. Patients treated at hospitals with low rates of neoadjuvant chemotherapy utilization experience decreased survival. Copyright © 2017 Elsevier Inc. All rights reserved.
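The observed-to-expected categorization described above can be sketched in a few lines. Python/NumPy; the hospital counts and per-patient probabilities are hypothetical, and the study's actual cut-points for low/average/high utilization are not given in this abstract:

```python
import numpy as np

def oe_ratio(observed_nact, predicted_probs):
    """Observed-to-expected ratio of neoadjuvant chemotherapy (NACT) use.

    observed_nact   -- patients at this hospital who received NACT
    predicted_probs -- per-patient NACT probabilities from the case-mix
                       logistic regression model
    """
    expected = float(np.sum(predicted_probs))  # model-expected NACT count
    return observed_nact / expected

# Hypothetical hospital: 40 of 100 patients received NACT while the
# case-mix model expected about 25, giving an O/E well above 1.
probs = np.full(100, 0.25)
ratio = oe_ratio(40, probs)
```

An O/E near 1 means the hospital uses neoadjuvant chemotherapy about as often as its case mix predicts; ratios well above or below 1 would place it in the high- or low-utilization groups compared in the survival analysis.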

  11. Variation in Neoadjuvant Chemotherapy Utilization for Epithelial Ovarian Cancer at High Volume Hospitals in the United States and Associated Survival

    PubMed Central

    Barber, Emma L; Dusetzina, Stacie B; Stitzenberg, Karyn B; Rossi, Emma C; Gehrig, Paola A; Boggess, John F; Garrett, Joanne M

    2017-01-01

    Objective To estimate variation in the use of neoadjuvant chemotherapy by high volume hospitals and to determine the association between hospital utilization of neoadjuvant chemotherapy and survival. Methods We identified incident cases of stage IIIC or IV epithelial ovarian cancer in the National Cancer Database from 2006–2012. Inclusion criteria were treatment at a high volume hospital (>20 cases/yr) and treatment with both chemotherapy and surgery. A logistic regression model was used to predict receipt of neoadjuvant chemotherapy based on case-mix predictors (age, comorbidities, stage etc). Hospitals were categorized by the observed-to-expected ratio for neoadjuvant chemotherapy use as low, average, or high utilization hospitals. Survival analysis was performed. Results We identified 11,574 patients treated at 55 high volume hospitals. Neoadjuvant chemotherapy was used for 21.6% (n=2494) of patients and use varied widely by hospital, from 5%–55%. High utilization hospitals (n=1910, 10 hospitals) had a median neoadjuvant chemotherapy rate of 39% (range 23–55%), while low utilization hospitals (n=2671, 14 hospitals) had a median rate of 10% (range 5–17%). For all ovarian cancer patients adjusting for clinical and socio-demographic factors, treatment at a hospital with average or high neoadjuvant chemotherapy utilization was associated with a decreased rate of death compared to treatment at a low utilization hospital (HR 0.90 95%CI 0.83–0.97 and HR 0.85 95%CI 0.75–0.95). Conclusions Wide variation exists in the utilization of neoadjuvant chemotherapy to treat stage IIIC and IV epithelial ovarian cancer even among high volume hospitals. Patients treated at hospitals with low rates of neoadjuvant chemotherapy utilization experience decreased survival. PMID:28366545

  12. Multi-stage learning for robust lung segmentation in challenging CT volumes.

    PubMed

    Sofka, Michal; Wetzl, Jens; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with high density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lung (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35s per volume on average) and segmentation of 2 mm accuracy on challenging data.

  13. Non-Invasive Electromagnetic Skin Patch Sensor to Measure Intracranial Fluid–Volume Shifts

    PubMed Central

    Griffith, Jacob; Cluff, Kim; Eckerman, Brandon; Aldrich, Jessica; Becker, Ryan; Moore-Jansen, Peer; Patterson, Jeremy

    2018-01-01

    Elevated intracranial fluid volume can drive intracranial pressure increases, which can potentially result in numerous neurological complications or death. This study’s focus was to develop a passive skin patch sensor for the head that would non-invasively measure cranial fluid volume shifts. The sensor consists of a single baseline component configured into a rectangular planar spiral with a self-resonant frequency response when impinged upon by external radio frequency sweeps. Fluid volume changes (10 mL increments) were detected through cranial bone using the sensor on a dry human skull model. Preliminary human tests utilized two sensors to determine feasibility of detecting fluid volume shifts in the complex environment of the human body. The correlation between fluid volume changes and shifts in the first resonance frequency using the dry human skull was classified as a second order polynomial with R2 = 0.97. During preliminary and secondary human tests, a ≈24 MHz and an average of ≈45.07 MHz shifts in the principal resonant frequency were measured respectively, corresponding to the induced cephalad bio-fluid shifts. This electromagnetic resonant sensor may provide a non-invasive method to monitor shifts in fluid volume and assist with medical scenarios including stroke, cerebral hemorrhage, concussion, or monitoring intracranial pressure. PMID:29596338

  14. Vascular capacitance and cardiac output in pacing-induced canine models of acute and chronic heart failure.

    PubMed

    Ogilvie, R I; Zborowska-Sluis, D

    1995-11-01

    The relationship between stressed and total blood volume, total vascular capacitance, central blood volume, cardiac output (CO), and pulmonary capillary wedge pressure (Ppcw) was investigated in pacing-induced acute and chronic heart failure. Acute heart failure was induced in anesthetized splenectomized dogs by a volume load (20 mL/kg over 10 min) during rapid right ventricular pacing at 250 beats/min (RRVP) for 60 min. Chronic heart failure was induced by continuous RRVP for 2-6 weeks (average 24 +/- 2 days). Total vascular compliance and capacitance were calculated from the mean circulatory filling pressure (Pmcf) during transient circulatory arrest after acetylcholine at three different circulating volumes. Stressed blood volume was calculated as a product of compliance and Pmcf, with the total blood volume measured by a dye dilution. Central blood volume (CBV) and CO were measured by thermodilution. Central (heart and lung) vascular capacitance was estimated from the plot of Ppcw against CBV. Acute volume loading without RRVP increased capacitance and CO, whereas after volume loading with RRVP, capacitance and CO were unaltered from baseline. Chronic RRVP reduced capacitance and CO. All interventions, volume +/- RRVP or chronic RRVP, increased stressed and central blood volumes and Ppcw. Acute or chronic RRVP reduced central vascular capacitance. Cardiac output was increased when stressed and unstressed blood volumes increased proportionately as during volume loading alone. When CO was reduced and Ppcw increased, as during chronic RRVP or acute RRVP plus a volume load, stressed blood volume was increased and unstressed blood volume was decreased. Thus, interventions that reduced CO and increased Ppcw also increased stressed and reduced unstressed blood volume and total vascular capacitance.

  15. Non-universal tracer diffusion in crowded media of non-inert obstacles.

    PubMed

    Ghosh, Surya K; Cherstvy, Andrey G; Metzler, Ralf

    2015-01-21

    We study the diffusion of a tracer particle, which moves in continuum space between a lattice of excluded volume, immobile non-inert obstacles. In particular, we analyse how the strength of the tracer-obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of partitioning of the tracer diffusion modes between trapping states when bound to obstacles and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer-obstacle adsorption and binding triggers a transient anomalous diffusion. From a very narrow spread of recorded individual time averaged trajectories we exclude continuous type random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer-crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
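The time-averaged mean squared displacement used throughout this analysis can be computed from a single trajectory as sketched below. Python/NumPy; free Brownian motion stands in as an ergodic reference process, not the crowded-obstacle system of the paper:

```python
import numpy as np

def time_averaged_msd(traj, lag):
    """Time-averaged MSD of one trajectory at integer lag (traj: T x d array)."""
    disp = traj[lag:] - traj[:-lag]           # all overlapping displacements
    return float(np.mean(np.sum(disp**2, axis=1)))

# Ergodic reference: free Brownian motion in 2D (unit-variance steps per
# axis), for which the time-averaged MSD grows linearly with lag.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(100_000, 2)), axis=0)

msd_10 = time_averaged_msd(traj, 10)    # expected ~ 2 * lag = 20
msd_100 = time_averaged_msd(traj, 100)  # expected ~ 200
```

For a transiently non-ergodic system like the one studied here, the spread of such single-trajectory curves across particles, and their disparity with the ensemble-averaged MSD, is exactly the diagnostic the authors use.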

  16. Towards the Irving-Kirkwood limit of the mechanical stress tensor

    NASA Astrophysics Data System (ADS)

    Smith, E. R.; Heyes, D. M.; Dini, D.

    2017-06-01

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V =ℓ3. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ =1.0 , a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.

  17. Towards the Irving-Kirkwood limit of the mechanical stress tensor.

    PubMed

    Smith, E R; Heyes, D M; Dini, D

    2017-06-14

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V=ℓ 3 . Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ≈3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ=1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.

  18. Towards the Irving-Kirkwood limit of the mechanical stress tensor

    PubMed Central

    Heyes, D. M.; Dini, D.

    2017-01-01

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V=ℓ3. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ≈3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ=1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems. PMID:29166053

  19. Measurements and Modeling of Soot Formation and Radiation in Microgravity Jet Diffusion Flames. Volume 4

    NASA Technical Reports Server (NTRS)

    Ku, Jerry C.; Tong, Li; Greenberg, Paul S.

    1996-01-01

    This is a computational and experimental study of soot formation and radiative heat transfer in jet diffusion flames under normal gravity (1-g) and microgravity (0-g) conditions. Instantaneous soot volume fraction maps are measured using a full-field imaging absorption technique developed by the authors. A compact, self-contained drop rig is used for microgravity experiments in the 2.2-second drop tower facility at NASA Lewis Research Center. On modeling, we have coupled flame structure and soot formation models with detailed radiation transfer calculations. Favre-averaged boundary layer equations with a k-ε-g turbulence model are used to predict the flow field, and a conserved scalar approach with an assumed Beta-pdf is used to predict gaseous species mole fractions. Scalar transport equations are used to describe soot volume fraction and number density distributions, with formation and oxidation terms modeled by one-step rate equations and thermophoretic effects included. An energy equation is included to couple the flame structure and radiation analyses through iterations, neglecting turbulence-radiation interactions. The YIX solution for a finite cylindrical enclosure is used for radiative heat transfer calculations. The spectral absorption coefficient for soot aggregates is calculated from the Rayleigh solution using complex refractive index data from a Drude-Lorentz model. The exponential wide-band model is used to calculate the spectral absorption coefficients for H2O and CO2. It is shown that, when compared to results from true spectral integration, the Rosseland mean absorption coefficient can provide reasonably accurate predictions for the type of flames studied. The soot formation model proposed by Moss, Syed, and Stewart seems to produce better fits to experimental data and to be more physically sound than the simpler model by Khan et al.
Predicted soot volume fraction and temperature results agree well with published data for normal-gravity co-flow laminar flames and turbulent jet flames. Predicted soot volume fraction results also agree with our data for 1-g and 0-g laminar jet flames as well as 1-g turbulent jet flames.

  20. Forecasting Daily Volume and Acuity of Patients in the Emergency Department.

    PubMed

    Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
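The MAPE criterion used to rank the forecasting models is straightforward to compute. A minimal sketch in Python/NumPy; the daily visit counts below are invented for illustration:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * float(np.mean(np.abs((actual - forecast) / actual)))

# Invented daily ED visit counts vs. one model's forecasts over 4 days.
actual = [220, 245, 210, 260]
forecast = [200, 250, 220, 240]
err = mape(actual, forecast)  # ≈ 5.9 %
```

Computing this per forecasting horizon (1, 7, 14, 21, 30 days) and per MTS acuity stratum reproduces the comparison design described above.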

  1. Forecasting Daily Volume and Acuity of Patients in the Emergency Department

    PubMed Central

    Fogliatto, Flavio S.; Neyeloff, Jeruza; Kuchenbecker, Ricardo S.; Schaan, Beatriz D.

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification. PMID:27725842

  2. Recent Changes in Glacial Area and Volume on Tuanjiefeng Peak Region of Qilian Mountains, China

    PubMed Central

    Xu, Junli; Liu, Shiyin; Zhang, Shiqiang; Guo, Wanqin; Wang, Jian

    2013-01-01

    Glaciers' runoff in the Qilian Mountains serves as a critical water resource in the northern sections of the Gansu province, the northeastern sections of the Qinghai province, and the northeastern fringe of the Tibetan Plateau. Changes in the glacial area and volume around the highest peak of the Qilian Mountains, i.e., Tuanjiefeng Peak, were estimated using multi-temporal remote-sensing images and digital elevation models, and all possible sources of uncertainty were considered in detail. The total glacier area decreased by 16.1±6.34 km2 (9.9±3.9%) during 1966 to 2010. The average annual glacier shrinkage was −0.15% a−1 from 1966 to 1995, −0.61% a−1 from 1995 to 2000, −0.20% a−1 from 2000 to 2006, and −0.45% a−1 from 2006 to 2010. A comparison of glacier surface elevations using digital elevation models derived from topographic maps in 1966 and from the Shuttle Radar Topography Mission in 1999 suggests that 65% of the grid cells have decreased in elevation, thereby indicating that the glacier thickness has declined. The average change in glacier thickness was −7.3±1.5 m (−0.21±0.04 m·a−1) from 1966 to 1999. Glaciers with northeastern aspects thinned by 8.3±1.4 m from 1966 to 1999, i.e., almost twice as much as those with southwestern aspects (4.3±1.3 m). The ice volume decreased by 11.72±2.38×108 m3 from 1966 to 1999, which was about 17.4% more than the value calculated from the statistical relationship between glacier area and volume. The relationship between glacier area change and elevation zone indicates that glacier change is not only dominated by climate change but also affected by glacier dynamics, which are related to local topography. The varied response of a single glacier to climate change indicates that the glacier area change scheme used in some models must be improved. PMID:24015174

  3. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  4. Gravitational potential wells and the cosmic bulk flow

    NASA Astrophysics Data System (ADS)

    Wang, Yuyu; Kumar, Abhinav; Feldman, Hume; Watkins, Richard

    2016-03-01

    The bulk flow is a volume average of the peculiar velocities and a useful probe of the mass distribution on large scales. The gravitational instability model views the bulk flow as a potential flow that obeys a Maxwellian distribution. We use two N-body simulations, the LasDamas Carmen and the Horizon Run, to calculate the bulk flows of variously sized volumes in the simulation boxes. Once we have the bulk flow velocities as a function of scale, we investigate the mass and gravitational potential distribution around the volume. We found that matter densities can be asymmetrical and difficult to detect in real surveys; however, the gravitational potential and its gradient may provide better tools to investigate the underlying matter distribution. This study shows that bulk flows are indeed potential flows and thus provides information on the flow sources. We also show that bulk flow magnitudes follow a Maxwellian distribution on scales > 10h-1 Mpc.
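
    The bulk flow described above is simply the volume average of the peculiar velocities of the tracers inside a region. A minimal pure-Python sketch of that averaging step (the velocities here are synthetic placeholders, not simulation data):

    ```python
    def bulk_flow(velocities):
        """Volume-averaged peculiar velocity (bulk flow) of a set of tracers.

        velocities: list of (vx, vy, vz) tuples in km/s, one per tracer
        inside the averaging volume (equal weights assumed).
        Returns the bulk-flow vector and its magnitude.
        """
        n = len(velocities)
        bx = sum(v[0] for v in velocities) / n
        by = sum(v[1] for v in velocities) / n
        bz = sum(v[2] for v in velocities) / n
        return (bx, by, bz), (bx * bx + by * by + bz * bz) ** 0.5
    ```

    Repeating this over spheres of increasing radius gives the bulk flow as a function of scale, whose magnitudes the study compares against a Maxwellian distribution.
    
    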

  5. 30 CFR 260.122 - How long will a royalty suspension volume be effective for a lease issued in a sale held after...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Notwithstanding any royalty suspension volume under this subpart, you must pay royalty at the lease stipulated... average of the daily closing price on the New York Mercantile Exchange (NYMEX) for light sweet crude oil... produced for any period stipulated in the lease during which the arithmetic average of the daily closing...

  6. Growth response to fertilizer in a young aspen-birch stand

    Treesearch

    Miroslaw M. Czapowskyj; Lawrence O. Safford

    1978-01-01

    A thinned aspen-birch-red maple stand was fertilized with N, P, and N plus P, both with and without lime (L). Overall, treatments with N increased height growth by an average of 79 percent, and volume growth by 69 percent, over treatments without N. Lime tended to increase both average height and volume growth over each corresponding treatment without lime. The amount...

  7. Use of molecular modeling to determine the interaction and competition of gases within coal for carbon dioxide sequestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey D. Evanseck; Jeffry D. Madura; Jonathan P. Mathews

    2006-04-21

    Molecular modeling was employed to both visualize and probe our understanding of carbon dioxide sequestration within a bituminous coal. A large-scale (>20,000 atoms) 3D molecular representation of Pocahontas No. 3 coal was generated. This model was constructed based on the review data of Stock and Muntean and on oxidation and decarboxylation data for aromatic cluster-size frequency from Stock and Obeng; the combination of laser desorption mass spectrometry data with HRTEM enabled the inclusion of a molecular weight distribution. The model contains 21,931 atoms, with a molecular mass of 174,873 amu, an average molecular weight of 714 amu, and 201 structural components. The structure was evaluated based on several characteristics to ensure a reasonable constitution (chemical and physical representation). The helium density of Pocahontas No. 3 coal is 1.34 g/cm{sup 3} (dmmf) and that of the model was 1.27 g/cm{sup 3}. The structure is microporous, with a pore volume comprising 34% of the volume as expected for a coal of this rank. The representation was used to visualize CO{sub 2} and CH{sub 4} capacity, and the role of moisture in swelling and in CO{sub 2} and CH{sub 4} capacity reduction. Inclusion of 0.68% moisture by mass (ash-free) enabled the model to swell by 1.2% (volume). Inclusion of CO{sub 2} enabled volumetric swelling of 4%.

  8. SU-F-P-27: The Study of Actual DVH for Target and OARs During the Radiotherapy of Non-Small Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, C; Yin, Y

    2016-06-15

    Purpose: To analyze the changes in the volume and dosimetry of the target and organs at risk (OARs) by comparing the daily CBCT images and planning CT images of patients with non-small cell lung cancer (NSCLC), and to analyze the difference between the planned dose and the accumulated dose. Methods: This study retrospectively analyzed eight cases of non-small cell lung cancer patients who received CRT or IMRT treatment and kV-CBCT. For each patient, the prescription dose was 60Gy and the fraction dose was 2Gy. The daily CBCT images were deformably registered to the planning CT images to compare the planning dose with the cumulative dose of targets and organs at risk in RayStation. Results: The average volume of GTV of the 8 patients on CBCT was 88.26% of the original volume. The average plan dose of GTV was 64.49±2.40Gy. The accumulated dose of GTV was 60.13±2.70Gy (P≤0.05). The average volume of PTV to reach the prescription dose was 95.59% for the original plan and 81.47% for the accumulated plan (P≤0.05). The volumes of the left and right lungs were 88.95% and 80.32% of their original volumes, respectively. The average dose of the left and right lung in the original plan was 9.31±1.75Gy and 4.33±1.10Gy, respectively (P≥0.05). The average accumulated dose was 9.63±1.96Gy and 4.63±1.36Gy, respectively (P≥0.05). The average plan dose and accumulated dose of the heart were 6.88±1.70Gy and 6.38±0.91Gy, respectively (P≥0.05). The average plan maximum dose and accumulated dose for the spinal cord were 24.62±5.91Gy and 26.00±5.14Gy, respectively (P≥0.05). Conclusion: Changes in the target anatomical structure in NSCLC cause differences between the planned dose and the cumulative dose. With the dose deformation method, the gap between the planned dose and the delivered dose can be identified.

  9. Logging residue in Washington, Oregon, California: volume and characteristics.

    Treesearch

    James O. Howard

    1973-01-01

    This report makes available data on the volume and characteristics of logging residue resulting from 1969 logging operations in Oregon, Washington, and California. The results indicate highest volumes of logging residue are found in the Douglas-fir region of western Oregon and western Washington. Average gross volume of residue in this region ranged...

  10. Pediatric chest and abdominopelvic CT: organ dose estimation based on 42 patient models.

    PubMed

    Tian, Xiaoyu; Li, Xiang; Segars, W Paul; Paulson, Erik K; Frush, Donald P; Samei, Ehsan

    2014-02-01

    To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. The institutional review board approved this HIPAA-compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0-16 years; weight range, 2-80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDI(vol)). The relationships between CTDI(vol)-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. For organs within the image coverage, CTDI(vol)-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R(2) > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%-32%) mainly because of the effect of overranging. It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDI(vol). These CTDI(vol)-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles across patient populations within a practice. © RSNA, 2013.
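
    The estimation scheme described above reduces to multiplying the scan's CTDI(vol) by a size-dependent coefficient, with the coefficient fitted as an exponential in the average patient diameter. A sketch of that final lookup step; the fit constants a and b below are purely hypothetical, not values from the study:

    ```python
    import math

    def organ_dose_mgy(ctdi_vol_mgy, avg_diameter_cm, a=2.0, b=0.04):
        """Organ dose = CTDIvol * h(d), with h(d) = a * exp(-b * d).

        a and b are hypothetical fit constants for illustration only; the
        study derives organ- and protocol-specific coefficients from
        Monte Carlo simulations in 42 patient models.
        """
        return ctdi_vol_mgy * a * math.exp(-b * avg_diameter_cm)
    ```

    The exponential form captures the reported behavior that, for organs inside the scan coverage, the normalized coefficient falls off smoothly as the patient gets larger.
    
    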

  11. Thermal Pollution Mathematical Model. Volume 6; Verification of Three-Dimensional Free-Surface Model at Anclote Anchorage; [environment impact of thermal discharges from power plants

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.

    1980-01-01

    The free-surface model presented is for tidal estuaries and coastal regions where ambient tidal forces play an important role in the dispersal of heated water. The model is time dependent and three dimensional, and can handle irregular bottom topography. The vertical stretching coordinate is adopted for better treatment of the kinematic condition at the water surface. The results include surface elevation, velocity, and temperature. The model was verified at the Anclote Anchorage site of Florida Power Company. Two data bases at four tidal stages for winter and summer conditions were used to verify the model. Differences between measured and predicted temperatures are, on average, less than 1 C.

  12. Network of listed companies based on common shareholders and the prediction of market volatility

    NASA Astrophysics Data System (ADS)

    Li, Jie; Ren, Da; Feng, Xu; Zhang, Yongjie

    2016-11-01

    In this paper, we build a network of listed companies in the Chinese stock market based on common shareholding data from 2003 to 2013. We analyze the evolution of topological characteristics of the network (e.g., average degree, diameter, average path length and clustering coefficient) with respect to the time sequence. Additionally, we consider the economic implications of topological characteristic changes on market volatility and use them to make future predictions. Our study finds that the network diameter significantly predicts volatility. After adding control variables used in traditional financial studies (volume, turnover and previous volatility), network topology still significantly influences volatility and improves the predictive ability of the model.

  13. 40 CFR 60.393 - Performance test and compliance provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... volume of applied solids emitted after the control device, by the following equation: N=G[1-FE] (A... month by the following equation: EC16NO91.031 (v) If the volume weighted average mass of VOC per volume...

  14. 3D morphometry using automated aortic segmentation in native MR angiography: an alternative to contrast enhanced MRA?

    PubMed

    Müller-Eschner, Matthias; Müller, Tobias; Biesdorf, Andreas; Wörz, Stefan; Rengier, Fabian; Böckler, Dittmar; Kauczor, Hans-Ulrich; Rohr, Karl; von Tengg-Kobligk, Hendrik

    2014-04-01

    Native-MR angiography (N-MRA) is considered an imaging alternative to contrast enhanced MR angiography (CE-MRA) for patients with renal insufficiency. Lower intraluminal contrast in N-MRA often leads to failure of the segmentation process in commercial algorithms. This study introduces an in-house 3D model-based segmentation approach used to compare both sequences by automatic 3D lumen segmentation, allowing for evaluation of differences of aortic lumen diameters as well as differences in length comparing both acquisition techniques at every possible location. Sixteen healthy volunteers underwent 1.5-T MR angiography (MRA). For each volunteer, two different MR sequences were performed, CE-MRA: gradient echo Turbo FLASH sequence and N-MRA: respiratory-and-cardiac-gated, T2-weighted 3D SSFP. Datasets were segmented using a 3D model-based ellipse-fitting approach with a single seed point placed manually above the celiac trunk. The segmented volumes were manually cropped from the left subclavian artery to the celiac trunk to avoid error due to side branches. Diameters, volumes and centerline length were computed for intraindividual comparison. For statistical analysis the Wilcoxon signed-rank test was used. Average centerline length obtained based on N-MRA was 239.0±23.4 mm compared to 238.6±23.5 mm for CE-MRA without significant difference (P=0.877). Average maximum diameter obtained based on N-MRA was 25.7±3.3 mm compared to 24.1±3.2 mm for CE-MRA (P<0.001). In agreement with the difference in diameters, volumes obtained based on N-MRA (100.1±35.4 cm(3)) were consistently and significantly larger compared to CE-MRA (89.2±30.0 cm(3)) (P<0.001). 3D morphometry shows highly similar centerline lengths for N-MRA and CE-MRA, but systematically higher diameters and volumes for N-MRA.

  15. 3D morphometry using automated aortic segmentation in native MR angiography: an alternative to contrast enhanced MRA?

    PubMed Central

    Müller-Eschner, Matthias; Müller, Tobias; Biesdorf, Andreas; Wörz, Stefan; Rengier, Fabian; Böckler, Dittmar; Kauczor, Hans-Ulrich; Rohr, Karl

    2014-01-01

    Introduction Native-MR angiography (N-MRA) is considered an imaging alternative to contrast enhanced MR angiography (CE-MRA) for patients with renal insufficiency. Lower intraluminal contrast in N-MRA often leads to failure of the segmentation process in commercial algorithms. This study introduces an in-house 3D model-based segmentation approach used to compare both sequences by automatic 3D lumen segmentation, allowing for evaluation of differences of aortic lumen diameters as well as differences in length comparing both acquisition techniques at every possible location. Methods and materials Sixteen healthy volunteers underwent 1.5-T MR angiography (MRA). For each volunteer, two different MR sequences were performed, CE-MRA: gradient echo Turbo FLASH sequence and N-MRA: respiratory-and-cardiac-gated, T2-weighted 3D SSFP. Datasets were segmented using a 3D model-based ellipse-fitting approach with a single seed point placed manually above the celiac trunk. The segmented volumes were manually cropped from the left subclavian artery to the celiac trunk to avoid error due to side branches. Diameters, volumes and centerline length were computed for intraindividual comparison. For statistical analysis the Wilcoxon signed-rank test was used. Results Average centerline length obtained based on N-MRA was 239.0±23.4 mm compared to 238.6±23.5 mm for CE-MRA without significant difference (P=0.877). Average maximum diameter obtained based on N-MRA was 25.7±3.3 mm compared to 24.1±3.2 mm for CE-MRA (P<0.001). In agreement with the difference in diameters, volumes obtained based on N-MRA (100.1±35.4 cm3) were consistently and significantly larger compared to CE-MRA (89.2±30.0 cm3) (P<0.001). Conclusions 3D morphometry shows highly similar centerline lengths for N-MRA and CE-MRA, but systematically higher diameters and volumes for N-MRA. PMID:24834406

  16. Dosimetric comparison of intensity modulated radiotherapy and three-dimensional conformal radiotherapy in patients with gynecologic malignancies: a systematic review and meta-analysis

    PubMed Central

    2012-01-01

    Background To quantitatively evaluate the safety and related toxicities of intensity modulated radiotherapy (IMRT) dose–volume histograms (DVHs), as compared to conventional three-dimensional conformal radiotherapy (3D-CRT), in gynecologic malignancy patients by systematic review of the related publications and meta-analysis. Methods Relevant articles were retrieved from the PubMed, Embase, and Cochrane Library databases up to August 2011. Two independent reviewers assessed the included studies and extracted data. Pooled average percent irradiated volumes of adjacent non-cancerous tissues were calculated and compared between IMRT and 3D-CRT for a range of common radiation doses (5-45 Gy). Results In total, 13 articles comprising 222 IMRT-treated and 233 3D-CRT-treated patients were included. For rectum receiving doses ≥30 Gy, the IMRT pooled average irradiated volumes were less than those from 3D-CRT by 26.40% (30 Gy, p = 0.004), 27.00% (35 Gy, p = 0.040), 37.30% (40 Gy, p = 0.006), and 39.50% (45 Gy, p = 0.002). Reduction in irradiated small bowel was also observed for IMRT-delivered 40 Gy and 45 Gy (by 17.80% (p = 0.043) and 17.30% (p = 0.012), respectively), as compared with 3D-CRT. However, there were no significant differences in the IMRT and 3D-CRT pooled average percent volumes of irradiated small bowel or rectum from lower doses, or in the bladder or bone marrow from any of the doses. IMRT-treated patients did not experience more severe acute or chronic toxicities than 3D-CRT-treated patients. Conclusions IMRT-delivered high radiation doses produced significantly smaller average percent volumes of irradiated rectum and small bowel than 3D-CRT, but did not differentially affect the average percent volumes in the bladder and bone marrow. PMID:23176540

  17. Observation of Intravascular Changes of Superabsorbent Polymer Microsphere (SAP-MS) with Monochromatic X-Ray Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanimoto, Daigo, E-mail: daigoro@med.kawasaki-m.ac.jp; Ito, Katsuyoshi; Yamamoto, Akira

    2010-10-15

    This study was designed to evaluate the intravascular transformation behavior of superabsorbent polymer microsphere (SAP-MS) in vivo macroscopically by using monochromatic X-ray imaging and to quantitatively compare the expansion rate of SAP-MS among different kinds of mixtures. Fifteen rabbits were used in our study, and transcatheter arterial embolization (TAE) was performed on their auricular arteries under monochromatic X-ray imaging. We used three kinds of SAP-MS (particle diameter 100-150 {mu}m) mixture as embolic spherical particles: SAP-MS(H) absorbed with sodium meglumine ioxaglate (Hexabrix 320), SAP-MS(V) absorbed with isosmolar contrast medium (Visipaque 270), and SAP-MS(S) absorbed with 0.9% sodium saline. The initial volume of SAP-MS particles just after TAE and the final volume 10 minutes after TAE in the vessel were measured to calculate the expansion rate (ER) (n = 30). Intravascular behavior of SAP-MS particles was clearly observed in real time at monochromatic X-ray imaging. Averaged initial volumes of SAP-MS (H) (1.24 x 10{sup 7} {mu}m{sup 3}) were significantly smaller (p < 0.001) than those of SAP-MS (V) (5.99 x 10{sup 7} {mu}m{sup 3}) and SAP-MS (S) (5.85 x 10{sup 7} {mu}m{sup 3}). Averaged final volumes of SAP-MS (H) were significantly larger than averaged initial volumes (4.41 x 10{sup 7} {mu}m{sup 3} vs. 1.24 x 10{sup 7} {mu}m{sup 3}; p < 0.0001, ER = 3.55). There was no significant difference between averaged final volumes and averaged initial volumes of SAP-MS (V) and SAP-MS (S). SAP-MS (H), which first travels distally, reaches small arteries, and then expands to adapt to the vessel lumen, is an effective particle as an embolic agent, causing effective embolization.
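
    The expansion rate quoted above is just the ratio of the averaged final to averaged initial particle volume (e.g., 4.41 x 10{sup 7} / 1.24 x 10{sup 7} gives roughly the reported ER of 3.55). A trivial sketch of that computation:

    ```python
    def expansion_rate(initial_volume, final_volume):
        """Expansion rate (ER) = final volume / initial volume.

        Units cancel, so any consistent volume unit (here um^3) works.
        """
        if initial_volume <= 0:
            raise ValueError("initial volume must be positive")
        return final_volume / initial_volume

    # SAP-MS(H) averaged volumes from the abstract, in um^3:
    er_h = expansion_rate(1.24e7, 4.41e7)
    ```

    Small rounding in the published averaged volumes explains why this reproduction lands near, rather than exactly on, the reported 3.55.
    
    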

  18. Decreasing trends of suspended particulate matter and PM2.5 concentrations in Tokyo, 1990-2010.

    PubMed

    Hara, Kunio; Homma, Junichi; Tamura, Kenji; Inoue, Mariko; Karita, Kanae; Yano, Eiji

    2013-06-01

    In Tokyo, the annual average suspended particulate matter (SPM) and PM2.5 concentrations have decreased in the past two decades. The present study quantitatively evaluated these decreasing trends using data from air-pollution monitoring stations. Annual SPM and PM2.5 levels at 83 monitoring stations and hourly SPM and PM2.5 levels at four monitoring stations in Tokyo, operated by the Tokyo Metropolitan Government, were used for analysis, together with levels of co-pollutants and meteorological conditions. Traffic volume in Tokyo was calculated from the total traveling distance of vehicles as reported by the Ministry of Land, Infrastructure, Transport, and Tourism. High positive correlations between SPM levels and nitrogen oxide levels, sulfur dioxide levels, and traffic volume were determined. The annual average SPM concentration declined by 62.6% from 59.4 microg/m3 in 1994 to 22.2 microg/m3 in 2010, and the PM2.5 concentration also declined by 49.8% from 29.3 microg/m3 in 2001 to 14.7 microg/m3 in 2010. Likewise, the frequencies of hourly average SPM and PM2.5 concentrations exceeding the daily guideline values have significantly decreased since 2001, and the hourly average SPM or PM2.5 concentrations per traffic volume for each time period have also significantly decreased since 2001. However, SPM and PM2.5 concentrations increased at some monitoring stations between 2004 and 2006 and from 2009 despite strengthened environmental regulations and improvements in vehicle engine performance. The annual average SPM and PM2.5 concentrations were positively correlated with traffic volumes and in particular with the volume of diesel trucks. These results suggest that the decreasing levels of SPM and PM2.5 in Tokyo may be attributable to decreased traffic volumes, along with the effects of stricter governmental regulation and improvements to vehicle engine performance, including the fitting of devices for exhaust emission reduction.

  19. Sequential dome-collapse nuées ardentes analyzed from broadband seismic data, Merapi Volcano, Indonesia

    USGS Publications Warehouse

    Brodscholl, A.; Kirbani, S.B.; Voight, B.

    2000-01-01

    The broadband data were evaluated using the assumption that avalanches with the same source areas and descent paths exhibit a linear relation between source volume and recorded seismic-amplitude envelope area. A result of the analysis is the determination of the volume of selected individual events. From the field surveys, the total volume of the collapsed dome lava is 2.6 Mm3. Discounting the volumetric influence of rockfalls, the average size of the 44 nuées ardentes is therefore about 60,000 m3. The largest collapse event at 10:54 is estimated to involve 260,000 m3, based on an analysis of the seismicity. The remaining 23 phase I events averaged 60,000 m3, with the total volume of all phase I events accounting for 63% of the unstable dome. The 20 phase II events comprised 37% of the total volume and averaged 47,000 m3. The methods described here can be put to practical use in real-time monitoring situations. Broadband data were essential in this study primarily because of the wide dynamic range.
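
    The assumed linear relation between source volume and seismic-amplitude envelope area means a single event of known volume calibrates all the others on the same descent path. A sketch of that calibration step (the envelope areas below are made-up placeholders, not values from the study):

    ```python
    def calibrate_volumes(envelope_areas, ref_area, ref_volume_m3):
        """Scale amplitude-envelope areas to source volumes via one
        reference event of known volume, assuming volume is proportional
        to envelope area for events sharing a source area and descent path.
        """
        k = ref_volume_m3 / ref_area  # proportionality constant, m^3 per unit area
        return [k * a for a in envelope_areas]

    # Hypothetical envelope areas, calibrated against a reference event
    # assigned the average nuee ardente volume of ~60,000 m^3:
    volumes = calibrate_volumes([1.0, 2.0, 0.5], ref_area=1.0, ref_volume_m3=60000.0)
    ```

    In practice the field-surveyed total collapse volume (2.6 Mm3 here) provides the constraint that fixes the proportionality constant.
    
    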

  20. A Mass Diffusion Model for Dry Snow Utilizing a Fabric Tensor to Characterize Anisotropy

    NASA Astrophysics Data System (ADS)

    Shertzer, Richard H.; Adams, Edward E.

    2018-03-01

    A homogenization algorithm for randomly distributed microstructures is applied to develop a mass diffusion model for dry snow. Homogenization is a multiscale approach linking constituent behavior at the microscopic level—among ice and air—to the macroscopic material—snow. Principles of continuum mechanics at the microscopic scale describe water vapor diffusion across an ice grain's surface to the air-filled pore space. Volume averaging and a localization assumption scale up and down, respectively, between microscopic and macroscopic scales. The model yields a mass diffusivity expression at the macroscopic scale that is, in general, a second-order tensor parameterized by both bulk and microstructural variables. The model predicts a mass diffusivity of water vapor through snow that is less than that through air. Mass diffusivity is expected to decrease linearly with ice volume fraction. Potential anisotropy in snow's mass diffusivity is captured due to the tensor representation. The tensor is built from directional data assigned to specific, idealized microstructural features. Such anisotropy has been observed in the field and laboratories in snow morphologies of interest such as weak layers of depth hoar and near-surface facets.
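
    The two scalar predictions above (effective diffusivity below that of air, decreasing linearly with ice volume fraction) can be sketched in isotropic form; the value of D_air and the simple linear closure are illustrative assumptions, since the full model yields a second-order tensor parameterized by microstructural fabric:

    ```python
    # Approximate water-vapor diffusivity in air near 0 degC, m^2/s (assumed value)
    D_AIR = 2.2e-5

    def snow_diffusivity(ice_volume_fraction, d_air=D_AIR):
        """Isotropic sketch of the model's qualitative prediction: effective
        vapor diffusivity through snow decreases linearly with ice volume
        fraction and never exceeds that of air. The full model replaces this
        scalar with a fabric-tensor-based second-order tensor.
        """
        if not 0.0 <= ice_volume_fraction <= 1.0:
            raise ValueError("ice volume fraction must lie in [0, 1]")
        return d_air * (1.0 - ice_volume_fraction)
    ```

    The tensor generalization is what lets the model express anisotropy, e.g. preferential vertical transport in depth-hoar layers.
    
    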

  1. Model for compressible turbulence in hypersonic wall boundary and high-speed mixing layers

    NASA Astrophysics Data System (ADS)

    Bowersox, Rodney D. W.; Schetz, Joseph A.

    1994-07-01

    The most common approach to Navier-Stokes predictions of turbulent flows is based on the classical Reynolds- or Favre-averaged Navier-Stokes equations, or some combination of the two. The main goal of the current work was to numerically assess the effects of the compressible turbulence terms that were experimentally found to be important. The compressible apparent mass mixing length extension (CAMMLE) model, which was based on measured experimental data, was found to produce accurate predictions of the measured compressible turbulence data for both the wall-bounded and free mixing layers. Hence, that model was incorporated into a finite volume Navier-Stokes code.

  2. Modeling the mechanical behavior of ceramic and heterophase structures manufactured using selective laser sintering and spark plasma sintering

    NASA Astrophysics Data System (ADS)

    Skripnyak, Vladimir A.; Skripnyak, Evgeniya G.; Skripnyak, Vladimir V.; Vaganova, Irina K.

    A model for predicting mechanical properties of ultra-high temperature ceramics and composites manufactured by selective laser sintering (SLS) and spark plasma sintering (SPS) under shock loading is presented. The model takes into account the porous structure, the specific volume and average sizes of phases, and the temperature of sintering. Residual stresses in ceramic composites reinforced with particles of refractory borides, carbides and nitrides after SLS or SPS were calculated. It is shown that the spall strength of diboride-zirconium matrix composites can be increased by the decreasing of porosity and the introduction of inclusions of specially selected refractory strengthening phases.

  3. Controls on Martian Hydrothermal Systems: Application to Valley Network and Magnetic Anomaly Formation

    NASA Technical Reports Server (NTRS)

    Harrison, Keith P.; Grimm, Robert E.

    2002-01-01

    Models of hydrothermal groundwater circulation can quantify limits to the role of hydrothermal activity in Martian crustal processes. We present here the results of numerical simulations of convection in a porous medium due to the presence of a hot intruded magma chamber. The parameter space includes magma chamber depth, volume, aspect ratio, and host rock permeability and porosity. A primary goal of the models is the computation of surface discharge. Discharge increases approximately linearly with chamber volume, decreases weakly with depth (at low geothermal gradients), and is maximized for equant-shaped chambers. Discharge increases linearly with permeability until limited by the energy available from the intrusion. Changes in the average porosity are balanced by changes in flow velocity and therefore have little effect. Water/rock ratios of approximately 0.1, obtained by other workers from models based on the mineralogy of the Shergotty meteorite, imply minimum permeabilities of 10(exp -16) sq m during hydrothermal alteration. If substantial vapor volumes are required for soil alteration, the permeability must exceed 10(exp -15) sq m. The principal application of our model is to test the viability of hydrothermal circulation as the primary process responsible for the broad spatial correlation of Martian valley networks with magnetic anomalies. For host rock permeabilities as low as 10(exp -17) sq m and intrusion volumes as low as 50 cu km, the total discharge due to intrusions building that part of the southern highlands crust associated with magnetic anomalies spans a range comparable to that of the inferred discharge from the overlying valley networks.

  4. Comparison of Selected EIA-782 Data With Other Data Sources

    EIA Publications

    2012-01-01

    This article compares annual average prices reported from the EIA-782 survey series for residential No. 2 distillate, on-highway diesel fuel, retail regular motor gasoline, refiner No. 2 fuel oil for resale, refiner No. 2 diesel fuel for resale, refiner regular motor gasoline for resale, and refiner kerosene-type jet fuel for resale with annual average prices reported by other sources. In terms of volume, it compares EIA-782C Prime Supplier annual volumes for motor gasoline (all grades), distillate fuel oil, kerosene-type jet fuel and residual fuel oil with annual volumes from other sources.

  5. Direct Simulation of Extinction in a Slab of Spherical Particles

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Mishchenko, Michael I.

    2013-01-01

    The exact multiple sphere superposition method is used to calculate the coherent and incoherent contributions to the ensemble-averaged electric field amplitude and Poynting vector in systems of randomly positioned nonabsorbing spherical particles. The target systems consist of cylindrical volumes, with radius several times larger than length, containing spheres with positional configurations generated by a Monte Carlo sampling method. Spatially dependent values for coherent electric field amplitude, coherent energy flux, and diffuse energy flux, are calculated by averaging of exact local field and flux values over multiple configurations and over spatially independent directions for fixed target geometry, sphere properties, and sphere volume fraction. Our results reveal exponential attenuation of the coherent field and the coherent energy flux inside the particulate layer and thereby further corroborate the general methodology of the microphysical radiative transfer theory. An effective medium model based on plane wave transmission and reflection by a plane layer is used to model the dependence of the coherent electric field on particle packing density. The effective attenuation coefficient of the random medium, computed from the direct simulations, is found to agree closely with effective medium theories and with measurements. In addition, the simulation results reveal the presence of a counter-propagating component to the coherent field, which arises due to the internal reflection of the main coherent field component by the target boundary. The characteristics of the diffuse flux are compared to, and found to be consistent with, a model based on the diffusion approximation of the radiative transfer theory.
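
    The effective attenuation coefficient reported from these simulations follows from the exponential decay of the coherent flux with depth, F(z) = F0·exp(−α·z): given the flux at two depths, α = ln(F1/F2)/(z2 − z1). A sketch of that extraction step (the depth/flux values in the test are synthetic):

    ```python
    import math

    def attenuation_coefficient(z1, f1, z2, f2):
        """Effective attenuation coefficient alpha recovered from coherent
        flux values f1, f2 at depths z1 < z2 inside the particulate layer,
        assuming the exponential decay F(z) = F0 * exp(-alpha * z).
        """
        if z2 <= z1:
            raise ValueError("require z2 > z1")
        if f1 <= 0 or f2 <= 0:
            raise ValueError("fluxes must be positive")
        return math.log(f1 / f2) / (z2 - z1)
    ```

    In the study, the coefficient obtained this way from the configuration-averaged fields is what is compared against effective-medium theories and measurements.
    
    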

  6. Efficient cold outflows driven by cosmic rays in high-redshift galaxies and their global effects on the IGM

    NASA Astrophysics Data System (ADS)

    Samui, Saumyadip; Subramanian, Kandaswamy; Srianand, Raghunathan

    2018-05-01

    We present semi-analytical models of galactic outflows in high-redshift galaxies driven by both hot thermal gas and non-thermal cosmic rays. Thermal pressure alone may not sustain a large-scale outflow in low-mass galaxies (i.e., M ˜ 10^8 M⊙) in the presence of supernova feedback with large mass loading. We show that inclusion of cosmic ray pressure allows outflow solutions even in these galaxies. In massive galaxies, for the same energy efficiency, cosmic ray-driven winds can propagate to larger distances than pure thermally driven winds. On average, gas in the cosmic ray-driven winds has a lower temperature, which could aid detecting it through absorption lines in the spectra of background sources. Using our constrained semi-analytical models of galaxy formation (which explain the observed ultraviolet luminosity functions of galaxies), we study the influence of cosmic ray-driven winds on the properties of the intergalactic medium (IGM) at different redshifts. In particular, we study the volume filling factor, average metallicity, cosmic ray and magnetic field energy densities for models invoking atomic cooled and molecular cooled haloes. We show that the cosmic rays in the IGM could have enough energy that can be transferred to the thermal gas in the presence of magnetic fields to influence the thermal history of the IGM. The significant volume filling and resulting strength of IGM magnetic fields can also account for recent γ-ray observations of blazars.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabin, Charles; Plevka, Pavel, E-mail: pavel.plevka@ceitec.muni.cz

Molecular replacement and noncrystallographic symmetry averaging were used to detwin a data set affected by perfect hemihedral twinning. The noncrystallographic symmetry averaging of the electron-density map corrected errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. Hemihedral twinning is a crystal-growth anomaly in which a specimen is composed of two crystal domains that coincide with each other in three dimensions. However, the orientations of the crystal lattices in the two domains differ in a specific way. In diffraction data collected from hemihedrally twinned crystals, each observed intensity contains contributions from both of the domains. With perfect hemihedral twinning, the two domains have the same volumes and the observed intensities do not contain sufficient information to detwin the data. Here, the use of molecular replacement and of noncrystallographic symmetry (NCS) averaging to detwin a 2.1 Å resolution data set for Aichi virus 1 affected by perfect hemihedral twinning is described. The NCS averaging enabled the correction of errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. The procedure permitted the structure to be determined from a molecular-replacement model that had 16% sequence identity and a 1.6 Å r.m.s.d. for Cα atoms in comparison to the crystallized structure. The same approach could be used to solve other data sets affected by perfect hemihedral twinning from crystals with NCS.

  8. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven

    2015-02-15

Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models.
These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume related contouring errors of all the tested samples. As for the detection results on structural shape related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with a low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on the improvement of strategy reliability by utilizing more training sets and additional geometric attribute constraints.
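For reference, the sensitivity and specificity figures quoted above come from confusion counts in the usual way; a minimal sketch with hypothetical counts (not the study's actual tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a batch of checked contours
sens, spec = sensitivity_specificity(tp=43, fn=2, tn=91, fp=9)
```

The ROC analysis mentioned in the abstract amounts to recomputing these two quantities while sweeping a detection threshold.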

  9. Quantification of errors induced by temporal resolution on Lagrangian particles in an eddy-resolving model

    NASA Astrophysics Data System (ADS)

    Qin, Xuerong; van Sebille, Erik; Sen Gupta, Alexander

    2014-04-01

Lagrangian particle tracking within ocean models is an important tool for the examination of ocean circulation, ventilation timescales and connectivity and is increasingly being used to understand ocean biogeochemistry. Lagrangian trajectories are obtained by advecting particles within velocity fields derived from hydrodynamic ocean models. For studies of ocean flows on scales ranging from mesoscale up to basin scales, the temporal resolution of the velocity fields should ideally not be more than a few days to capture the high frequency variability that is inherent in mesoscale features. However, in reality, the model output is often archived at much lower temporal resolutions. Here, we quantify the differences in the Lagrangian particle trajectories embedded in velocity fields of varying temporal resolution. Particles are advected using 3-day to 30-day averaged fields in a high-resolution global ocean circulation model. We also investigate whether adding lateral diffusion to the particle movement can compensate for the reduced temporal resolution. Trajectory errors reveal the expected degradation of accuracy in the trajectory positions when decreasing the temporal resolution of the velocity field. Divergence timescales associated with averaging velocity fields up to 30 days are faster than the intrinsic dispersion of the velocity fields but slower than the dispersion caused by the interannual variability of the velocity fields. In experiments focusing on the connectivity along major currents, including western boundary currents, the volume transport carried between two strategically placed sections tends to increase with increased temporal averaging. Simultaneously, the average travel times tend to decrease. Based on these two bulk diagnostics, Lagrangian experiments that use temporal averaging of up to nine days show no significant degradation in the flow characteristics for a set of six currents investigated in more detail.
The addition of random-walk-style diffusion does not mitigate the errors introduced by temporal averaging for large-scale open ocean Lagrangian simulations.
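As a toy illustration of how such trajectory errors can be quantified, the sketch below advects a particle through an idealized 1D travelling-wave velocity field and through its time average (synthetic field, arbitrary units; not the ocean model output used in the study). The 6-day wave period and the field itself are invented for illustration:

```python
import math

def advect(u, x0, t_end, dt):
    """Forward-Euler advection of a particle through a 1D velocity field u(x, t)."""
    x, t = x0, 0.0
    for _ in range(round(t_end / dt)):
        x += u(x, t) * dt
        t += dt
    return x

# Idealized travelling-wave field: a 6-day oscillation that a 30-day average removes
u_full = lambda x, t: 1.0 + 0.5 * math.sin(2.0 * math.pi * (t / 6.0 - x / 10.0))
u_avg = lambda x, t: 1.0  # the same field after temporal averaging

x_full = advect(u_full, 0.0, 30.0, 0.01)
x_avg = advect(u_avg, 0.0, 30.0, 0.01)
trajectory_error = abs(x_full - x_avg)  # endpoint separation after 30 "days"
```

The endpoint separation is the simplest of the error metrics; divergence timescales come from tracking how this separation grows in time across many particle pairs.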

  10. Investigation of unsteadiness in Shock-particle cloud interaction: Fully resolved two-dimensional simulation and one-dimensional modeling

    NASA Astrophysics Data System (ADS)

    Hosseinzadeh-Nik, Zahra; Regele, Jonathan D.

    2015-11-01

Dense compressible particle-laden flow, which has a complex nature, exists in various engineering applications. A shock wave impacting a particle cloud is a canonical problem for investigating this type of flow. It has been demonstrated that large flow unsteadiness is generated inside the particle cloud by the flow induced by the shock passage. It is desirable to develop models for the Reynolds stress to capture the energy contained in vortical structures so that volume-averaged models with point particles can be simulated accurately. However, previous work used the Euler equations, which makes the prediction of vorticity generation and propagation inaccurate. In this work, a fully resolved two-dimensional (2D) simulation using the compressible Navier-Stokes equations, with a volume penalization method to model the particles, has been performed with the parallel adaptive wavelet-collocation method. The results still show large unsteadiness inside and downstream of the particle cloud. A 1D model is created for the unclosed terms based upon these 2D results. The 1D model uses a two-phase simple low-dissipation AUSM scheme (TSLAU) coupled with the compressible two-phase kinetic energy equation.

  11. Physical Models of Layered Polar Firn Brightness Temperatures from 0.5 to 2 GHz

    NASA Technical Reports Server (NTRS)

    Tan, Shurun; Aksoy, Mustafa; Brogioni, Marco; Macelloni, Giovanni; Durand, Michael; Jezek, Kenneth C.; Wang, Tian-Lin; Tsang, Leung; Johnson, Joel T.; Drinkwater, Mark R.; hide

    2015-01-01

    We investigate physical effects influencing 0.5-2 GHz brightness temperatures of layered polar firn to support the Ultra Wide Band Software Defined Radiometer (UWBRAD) experiment to be conducted in Greenland and in Antarctica. We find that because ice particle grain sizes are very small compared to the 0.5-2 GHz wavelengths, volume scattering effects are small. Variations in firn density over cm- to m-length scales, however, cause significant effects. Both incoherent and coherent models are used to examine these effects. Incoherent models include a 'cloud model' that neglects any reflections internal to the ice sheet, and the DMRT-ML and MEMLS radiative transfer codes that are publicly available. The coherent model is based on the layered medium implementation of the fluctuation dissipation theorem for thermal microwave radiation from a medium having a nonuniform temperature. Density profiles are modeled using a stochastic approach, and model predictions are averaged over a large number of realizations to take into account an averaging over the radiometer footprint. Density profiles are described by combining a smooth average density profile with a spatially correlated random process to model density fluctuations. It is shown that coherent model results after ensemble averaging depend on the correlation lengths of the vertical density fluctuations. If the correlation length is moderate or long compared with the wavelength (approximately 0.6x longer or greater for Gaussian correlation function without regard for layer thinning due to compaction), coherent and incoherent model results are similar (within approximately 1 K). However, when the correlation length is short compared to the wavelength, coherent model results are significantly different from the incoherent model by several tens of kelvins. For a 10-cm correlation length, the differences are significant between 0.5 and 1.1 GHz, and less for 1.1-2 GHz. 
Model results are shown to be able to match the v-pol SMOS data closely and predict the h-pol data for small observation angles.
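The "smooth average density profile plus spatially correlated random fluctuations" construction described above can be sketched with a first-order autoregressive (AR(1)) process, which has exactly the exponential vertical correlation controlled by a correlation length. This is one common way to realize such a process, not necessarily the authors' exact scheme, and all numbers below are hypothetical:

```python
import math
import random

def density_profile(n, dz, rho_mean, sigma, corr_len, seed=0):
    """Mean density plus an exponentially correlated (AR(1)) random fluctuation
    with vertical correlation length corr_len (same units as the step dz)."""
    rng = random.Random(seed)
    phi = math.exp(-dz / corr_len)              # lag-1 correlation between samples
    innov = sigma * math.sqrt(1.0 - phi * phi)  # keeps the marginal std at sigma
    f = rng.gauss(0.0, sigma)
    profile = []
    for _ in range(n):
        profile.append(rho_mean + f)
        f = phi * f + rng.gauss(0.0, innov)
    return profile

# e.g. a 40 m column sampled every 2 cm, 10 cm correlation length (kg/m^3)
rho = density_profile(n=2000, dz=0.02, rho_mean=350.0, sigma=25.0, corr_len=0.10)
```

Averaging brightness temperatures over many such realizations (different seeds) mimics the footprint averaging described in the abstract.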

  12. An International Survey of Industrial Applications of Formal Methods. Volume 2. Case Studies

    DTIC Science & Technology

    1993-09-30

impact of the product on IBM revenues. 4. Error rates were claimed to be below industrial average and errors were minimal to fix. Formal methods, as...critical applications. These include: i) "Software failures, particularly under first use, seem...project to add improved modelling capability. Design and Implementation These products are being

  13. Broadcasting but not receiving: density dependence considerations for SETI signals

    NASA Astrophysics Data System (ADS)

    Smith, Reginald D.

    2009-04-01

This paper develops a detailed quantitative model which uses the Drake equation and an assumption of an average maximum radio broadcasting distance by a communicative civilization. On this basis, it estimates the minimum civilization density for contact between two civilizations to be probable in a given volume of space under certain conditions, the amount of time it would take for a first contact, and whether reciprocal contact is possible.
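The basic geometric threshold underlying such a density estimate (not the paper's full model, which layers the Drake equation on top of it) is to require one expected neighbor within the broadcast sphere:

```python
import math

def min_density_for_contact(r_broadcast):
    """Civilization density at which, on average, one other broadcasting
    civilization lies within a sphere of the maximum broadcast radius."""
    return 3.0 / (4.0 * math.pi * r_broadcast ** 3)

# e.g. an assumed maximum broadcast range of 1000 light-years
n_min = min_density_for_contact(1000.0)  # civilizations per cubic light-year
```

Below this density the expected number of neighbors within range falls under one, and contact within a broadcast lifetime becomes improbable.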

  14. Optimization design and performance analysis of a miniature stirling engine

    NASA Astrophysics Data System (ADS)

    You, Zhanping; Yang, Bo; Pan, Lisheng; Hao, Changsheng

    2017-10-01

Under given operating conditions, a 2 kW Stirling engine that takes hydrogen as its working medium is designed. Through the establishment of an adiabatic model, ways of improving performance are identified: raising the temperature of the hot end, lowering the temperature of the cold end, increasing the average cycle pressure, increasing the engine speed, setting the phase angle to 90°, keeping the stroke volume ratio close to 1, and improving the performance of the regenerator.

  15. Electronic Structure Methods Based on Density Functional Theory

    DTIC Science & Technology

    2010-01-01

The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing...chapter in the ASM Handbook, Volume 22A: Fundamentals of Modeling for Metals Processing, 2010. PAO Case Number: 88ABW-2009-3258; Clearance Date: 16 Jul...are represented using a linear combination, or basis, of plane waves. Over time several methods were developed to avoid the large number of plane waves

  16. Evaluation of an integrated orbital tissue expander in congenital anophthalmos: report of preliminary clinical experience.

    PubMed

    Tse, David T; Abdulhafez, Mohammad; Orozco, Marcia A; Tse, Jeffrey D; Azab, Amr Osama; Pinchuk, Leonard

    2011-03-01

To evaluate the effectiveness of an orbital tissue expander designed to stimulate orbital bone growth in an anophthalmic socket. Retrospective, noncomparative, interventional case series. Institutional. Nine consecutive patients with unilateral congenital anophthalmos. The orbital tissue expander is made of an inflatable silicone globe sliding on a titanium T-plate secured to the lateral orbital rim with screws. The globe is inflated by a transconjunctival injection of normal saline through a 30-gauge needle to a final volume of approximately 5 cm³. Computed tomography scans were used to determine the orbital volume. The data studied were: demographics, prior orbital expansion procedures, secondary interventions, orbital symmetry, and implant-related complications. The primary outcome measure was the orbital volume change, and the secondary outcome measures were changes in forehead, brow, and zygomatic eminence contour and adverse events. The average patient age at implantation was 41.89 ± 39.42 months (range, 9 to 108 months). The initial average volume of inflation was 3.00 ± 0.87 cm³ (range, 2.0 to 4.0 cm³), and the average final volume of 4.33 ± 0.50 cm³ (range, 4.0 to 5.0 cm³) was achieved. The duration of expansion was 18.89 ± 8.80 months (range, 4 to 26 months). All patients demonstrated an average increase in the volume of the orbit implanted with the tissue expander of 5.112 ± 2.173 cm³ (range, 2.81 to 10.38 cm³). The average difference between the volume of the implanted and the initial contralateral orbit was 5.68 ± 2.34 cm³, which decreased to 2.53 ± 1.80 cm³ at the final measurement (P < .001, paired t test). All implants remained inflated except for 2 iatrogenic punctures at the second inflation and 1 that was the result of implant failure. All were replaced. The integrated orbital tissue expander is safe and effective in stimulating anophthalmic socket bone growth. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. A decision-making tool to determine economic feasibility and break-even prices for artisan cheese operations.

    PubMed

    Durham, Catherine A; Bouma, Andrea; Meunier-Goddik, Lisbeth

    2015-12-01

Artisan cheese makers lack access to valid economic data to help them evaluate business opportunities and make important business decisions such as determining cheese pricing structure. The objective of this study was to utilize an economic model to evaluate the net present value (NPV), internal rate of return, and payback period for artisan cheese production at different annual production volumes. The model was also used to determine the minimum retail price necessary to ensure positive NPV for 5 different cheese types produced at 4 different production volumes. Milk type, cheese yield, and aging time all affected variable costs. However, aged cheeses required additional investment for aging space (which needs to be larger for longer aging times), as did lower yield cheeses (by requiring larger-volume equipment for pasteurization and milk handling). As the volume of milk required increased, switching from vat pasteurization to high-temperature, short-time pasteurization was necessary for low-yield cheeses before being required for high-yield cheeses, which causes an additional increase in investment costs. Because of these differences, high-moisture, fresh cow milk cheeses can be sold for about half the price of hard, aged goat milk cheeses at the largest production volume or for about two-thirds the price at the lowest production volume examined. For example, for the given model assumptions, at an annual production of 13,608 kg of cheese (30,000 lb), a fresh cow milk mozzarella should be sold at a minimum retail price of $27.29/kg ($12.38/lb), whereas a goat milk Gouda needs a minimum retail price of $49.54/kg ($22.47/lb). Artisan cheese makers should carefully evaluate annual production volumes. Although larger production volumes decrease average fixed cost and improve production efficiency, production can reach volumes where it becomes necessary to sell through distributors. 
Because distributors might pay as little as 35% of retail price, the retail price needs to be higher to compensate. An artisan cheese company that has not achieved the recognition needed to command a premium price may not find selling through distributors profitable. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
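The NPV and distributor-margin arithmetic described above can be sketched as follows (hypothetical cash flows and discount rates; the study's model includes many more cost components):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] is received at the end of year t
    (cashflows[0] is the up-front investment, negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def wholesale_price(retail_price, distributor_share):
    """Revenue per kg when a distributor pays a fraction of the retail price."""
    return retail_price * distributor_share

# Hypothetical project: $100k investment, $25k net cash per year for 6 years
project = [-100_000.0] + [25_000.0] * 6
npv_10 = npv(0.10, project)  # positive at a 10% discount rate
npv_20 = npv(0.20, project)  # negative at 20%; the IRR lies between the two rates

# Selling the abstract's $49.54/kg Gouda through a distributor paying 35% of retail
revenue_per_kg = wholesale_price(49.54, 0.35)
```

Bisecting between the two bracketing rates until the NPV is zero yields the internal rate of return; the payback period is the first year in which cumulative cash flow turns positive.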

  18. A semi-analytical description of protein folding that incorporates detailed geometrical information

    PubMed Central

    Suzuki, Yoko; Noel, Jeffrey K.; Onuchic, José N.

    2011-01-01

    Much has been done to study the interplay between geometric and energetic effects on the protein folding energy landscape. Numerical techniques such as molecular dynamics simulations are able to maintain a precise geometrical representation of the protein. Analytical approaches, however, often focus on the energetic aspects of folding, including geometrical information only in an average way. Here, we investigate a semi-analytical expression of folding that explicitly includes geometrical effects. We consider a Hamiltonian corresponding to a Gaussian filament with structure-based interactions. The model captures local features of protein folding often averaged over by mean-field theories, for example, loop contact formation and excluded volume. We explore the thermodynamics and folding mechanisms of beta-hairpin and alpha-helical structures as functions of temperature and Q, the fraction of native contacts formed. Excluded volume is shown to be an important component of a protein Hamiltonian, since it both dominates the cooperativity of the folding transition and alters folding mechanisms. Understanding geometrical effects in analytical formulae will help illuminate the consequences of the approximations required for the study of larger proteins. PMID:21721664

  19. Matter Lagrangian of particles and fluids

    NASA Astrophysics Data System (ADS)

    Avelino, P. P.; Sousa, L.

    2018-03-01

We consider a model where particles are described as localized concentrations of energy, with fixed rest mass and structure, which are not significantly affected by their self-induced gravitational field. We show that the volume average of the on-shell matter Lagrangian Lm describing such particles, in the proper frame, is equal to the volume average of the trace T of the energy-momentum tensor in the same frame, independently of the particle's structure and constitution. Since both Lm and T are scalars, and thus independent of the reference frame, this result is also applicable to collections of moving particles and, in particular, to those which can be described by a perfect fluid. Our results are expected to be particularly relevant in the case of modified theories of gravity with nonminimal coupling to matter, where the matter Lagrangian appears explicitly in the equations of motion of the gravitational and matter fields, such as f(R, Lm) and f(R, T) gravity. In particular, they indicate that, in this context, f(R, Lm) theories may be regarded as a subclass of f(R, T) gravity.
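The abstract's central identity, with angle brackets denoting the proper-frame volume average over the particle, can be written as:

```latex
\langle \mathcal{L}_m \rangle = \langle T \rangle ,
\qquad T \equiv g^{\mu\nu} T_{\mu\nu} ,
```

so that, on shell and for collections of such particles (including a perfect fluid description of them), the matter Lagrangian entering the field equations may be replaced by the trace of the energy-momentum tensor.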

  20. Imaging quality evaluation method of pixel coupled electro-optical imaging system

    NASA Astrophysics Data System (ADS)

    He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui

    2017-09-01

With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible" with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image quality evaluation of the coupled discrete sampling imaging system. Based on the transfer process of a grayscale cosine-distribution optical signal in the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can be used as a basis for subsequent studies on the convergence and periodically oscillating characteristics of the function. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.

  1. Calculation of change in brain temperatures due to exposure to a mobile phone

    NASA Astrophysics Data System (ADS)

    Van Leeuwen, G. M. J.; Lagendijk, J. J. W.; Van Leersum, B. J. A. M.; Zwamborn, A. P. M.; Hornsleth, S. N.; Kotte, A. N. T. J.

    1999-10-01

In this study we evaluated for a realistic head model the 3D temperature rise induced by a mobile phone. This was done numerically with the consecutive use of an FDTD model to predict the absorbed electromagnetic power distribution, and a thermal model describing bioheat transfer both by conduction and by blood flow. We calculated a maximum rise in brain temperature of 0.11 °C for an antenna with an average emitted power of 0.25 W, the maximum value in common mobile phones, and indefinite exposure. Maximum temperature rise is at the skin. The power distributions were characterized by a maximum averaged SAR over an arbitrarily shaped 10 g volume of approximately 1.6 W kg⁻¹. Although these power distributions are not in compliance with all proposed safety standards, temperature rises are far too small to have lasting effects. We verified our simulations by measuring the skin temperature rise experimentally. Our simulation method can be instrumental in further development of safety standards.

  2. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  3. CrystalMoM: a tool for modeling the evolution of Crystals Size Distributions in magmas with the Method of Moments

    NASA Astrophysics Data System (ADS)

    Colucci, Simone; de'Michieli Vitturi, Mattia; Landi, Patrizia

    2016-04-01

It is well known that nucleation and growth of crystals play a fundamental role in controlling magma ascent dynamics and eruptive behavior. Size- and shape-distribution of crystal populations can affect mixture viscosity, causing, potentially, transitions between effusive and explosive eruptions. Furthermore, volcanic samples are usually characterized in terms of Crystal Size Distribution (CSD), which provides valuable insight into the physical processes that led to the observed distributions. For example, a large average size can be representative of a slow magma ascent, and a bimodal CSD may indicate two events of nucleation, determined by two degassing events within the conduit. The Method of Moments (MoM), well established in the field of chemical engineering, represents a mesoscopic modeling approach that rigorously tracks the polydispersity by considering the evolution in time and space of integral parameters characterizing the distribution, the moments, by solving their transport differential-integral equations. One important advantage of this approach is that the moments of the distribution correspond to quantities that have meaningful physical interpretations and are directly measurable in natural eruptive products, as well as in experimental samples. For example, when the CSD is defined by the number of particles of size D per unit volume of the magmatic mixture, the zeroth moment gives the total number of crystals, the third moment gives the crystal volume fraction in the magmatic mixture, and ratios between successive moments provide different ways to evaluate average crystal length. Tracking these quantities, instead of volume fraction only, will allow using, for example, more accurate viscosity models in numerical codes for magma ascent. Here we adopted, for the first time, a quadrature-based method of moments to track the temporal evolution of the CSD in a magmatic mixture, and we verified and calibrated the model against experimental data. 
We also show how the equations and the tool developed can be integrated in a magma ascent numerical model, with application to eruptive events that occurred at Stromboli volcano (Italy).
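The moment bookkeeping described above can be sketched for a discretized CSD (hypothetical bins and number densities; the π/6 shape factor assumes spherical crystals):

```python
import math

def csd_moments(sizes, number_density, kmax=3):
    """Discrete moments M_k = sum_i n_i * D_i**k of a crystal size distribution."""
    return [sum(n * d ** k for d, n in zip(sizes, number_density))
            for k in range(kmax + 1)]

# Hypothetical CSD: crystal size D (mm) and number density per bin (per mm^3)
sizes = [1.0, 2.0, 4.0]
number = [1.0e-3, 5.0e-4, 1.0e-4]
m = csd_moments(sizes, number)

total_crystals = m[0]                     # zeroth moment: crystals per unit volume
mean_size = m[1] / m[0]                   # ratio of successive moments (mm)
volume_fraction = (math.pi / 6.0) * m[3]  # third moment, spheres assumed
```

The quadrature-based MoM used in the paper evolves a small set of such moments in time and space rather than the full distribution.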

  4. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    PubMed

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

    Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
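Of the two components of the hybrid, the grey-system part is the simpler to sketch. Below is a minimal GM(1,1) model, the standard grey forecasting scheme, applied to made-up annual tonnages (not the paper's data, and not its exact formulation):

```python
import math

def gm11_forecast(x0, steps):
    """Grey GM(1,1) model: fit an exponential trend to a short positive series
    x0 and forecast `steps` values beyond it."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]          # accumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    # Least-squares solution of x0[k] = -a*z[k] + b
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    a = -(m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy + a * sz) / m
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Hypothetical annual MSW series growing ~10%/yr (thousand tonnes)
forecast = gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=1)
```

The SARIMA component would supply the month-scale seasonal structure that a pure trend model like this cannot capture.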

  5. Dissolved oxygen analysis, TMDL model comparison, and particulate matter shunting—Preliminary results from three model scenarios for the Klamath River upstream of Keno Dam, Oregon

    USGS Publications Warehouse

    Sullivan, Annett B.; Rounds, Stewart A.; Deas, Michael L.; Sogutlugil, I. Ertugrul

    2012-01-01

    Efforts are underway to identify actions that would improve water quality in the Link River to Keno Dam reach of the Upper Klamath River in south-central Oregon. To provide further insight into water-quality improvement options, three scenarios were developed, run, and analyzed using previously calibrated CE-QUAL-W2 hydrodynamic and water-quality models. Additional scenarios are under development as part of this ongoing study. Most of these scenarios evaluate changes relative to a "current conditions" model, but in some cases a "natural conditions" model was used that simulated the reach without the effect of point and nonpoint sources and set Upper Klamath Lake at its Total Maximum Daily Load (TMDL) targets. These scenarios were simulated using a model developed by the U.S. Geological Survey (USGS) and Watercourse Engineering, Inc. for the years 2006–09, referred to here as the "USGS model." Another model of the reach was developed by Tetra Tech, Inc. for years 2000 and 2002 to support the Klamath River TMDL process; that model is referred to here as the "TMDL model." The three scenarios described in this report included (1) an analysis of whether this reach of the Upper Klamath River would be in compliance with dissolved oxygen standards if sources met TMDL allocations, (2) an application of more recent datasets to the TMDL model with comparison to results from the USGS model, and (3) an examination of the effect on dissolved oxygen in the Klamath River if particulate material were stopped from entering Klamath Project diversion canals. Updates and modifications to the USGS model are in progress, so in the future these scenarios will be reanalyzed with the updated model and the interim results presented here will be superseded. Significant findings from this phase of the investigation include: * The TMDL analysis used depth-averaged dissolved oxygen concentrations from model output for comparison with dissolved oxygen standards. 
The Oregon dissolved oxygen standards do not specify whether the numeric criteria are based on depth-averaged dissolved oxygen concentration; this was an interpretation of the standards rule by the Oregon Department of Environmental Quality (ODEQ). In this study, both depth-averaged and volume-averaged dissolved oxygen concentrations were calculated from model output. Results showed that modeled depth-averaged concentrations typically were lower than volume-averaged dissolved oxygen concentrations because depth-averaging gives a higher weight to small-volume areas near the channel bottom that often have lower dissolved oxygen concentrations. Results from model scenarios in this study are reported using volume-averaged dissolved oxygen concentrations. * Under all scenarios analyzed, violations of the dissolved oxygen standard occurred most often in summer. Of the three dissolved oxygen criteria that must be met, the 30-day standard was violated most frequently. Under the base case (current conditions), fewer violations occurred in the upstream part of the reach. More violations occurred in the downstream direction, due in part to oxygen demand from the decay of algae and organic matter from Link River and other inflows. * A condition in which Upper Klamath Lake and its Link River outflow achieved Upper Klamath Lake TMDL water-quality targets was most effective in reducing the number of violations of the dissolved oxygen standard in the Link River to Keno Dam reach of the Klamath River. The condition in which point and nonpoint sources within the Link River to Keno Dam reach met Klamath River TMDL allocations had no effect on dissolved oxygen compliance in some locations and a small effect in others under current conditions. On the other hand, meeting TMDL allocations for nonpoint and point sources was predicted to be important in meeting dissolved oxygen criteria when Upper Klamath Lake and Link River also met Upper Klamath TMDL water-quality targets. 
* The location of greatest dissolved oxygen improvement from nutrient and organic matter reductions was downstream from point and nonpoint source inflows because time and distance are required for decay to occur and for oxygen demand to be exerted. * After assessing compliance with dissolved oxygen standards at all 102 model segments in the Link River to Keno Dam reach, it was determined that the seven locations used by ODEQ appear to be a representative subset of the reach for dissolved oxygen analysis. * The USGS and TMDL models were qualitatively compared by running both models for the 2006–09 period but preserving the essential characteristics of each, such as organic matter partitioning, bathymetric representation, and parameter rates. The analysis revealed that some constituents were not greatly affected by the differing algorithms, rates, and assumptions in the two models. Conversely, other constituents, especially organic matter, were simulated differently by the two models. Organic matter in this river system is best represented by a mixture of relatively labile particulate material and a substantial concentration of refractory dissolved material. In addition, the use of a first-order sediment oxygen demand, as in the USGS model, helps to capture the seasonal and dynamic effect of settled organic and algal material. * Simulation of shunting (diverting) particulate material away from the intake of four Klamath Project diversion canals, so that the material stayed in the river and out of the Project area, caused higher concentrations of particulate material to occur in the river. In all cases modeled, the increase in in-river particulate material also produced decreased dissolved oxygen concentrations and an increase in the number of days when dissolved oxygen standards were violated. 
* If particulate material were shunted back into the river at the Klamath Project diversion canals, less organic matter and nutrients would be taken into the Klamath Project area and the Lost River basin, resulting in return flows to the Klamath River via Lost River Diversion Channel that may have reduced nutrient concentrations. Model scenarios bracketing potential end-member nutrient concentrations showed that the composition of the return flows had little to no effect on dissolved oxygen compliance under simulated conditions.
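
    The depth- vs volume-averaging distinction described in this record can be sketched with a single column of model cells. The cell volumes and dissolved oxygen concentrations below are hypothetical illustrative values, not output from the USGS or TMDL models.

```python
# Hypothetical water column, surface to bottom. Near-bottom cells are
# smaller in volume and lower in dissolved oxygen, as described above.
volumes = [120.0, 100.0, 80.0, 40.0, 10.0]   # cell volumes, m^3
do_conc = [8.0, 7.5, 6.8, 5.0, 2.5]          # dissolved oxygen, mg/L

# Depth average: each layer weighted equally, regardless of its volume.
depth_avg = sum(do_conc) / len(do_conc)

# Volume average: each layer weighted by the water volume it represents.
vol_avg = sum(v * c for v, c in zip(volumes, do_conc)) / sum(volumes)

# The depth average comes out lower because the small near-bottom cells,
# with low DO, carry as much weight as the large surface cells.
print(f"depth-averaged DO:  {depth_avg:.2f} mg/L")
print(f"volume-averaged DO: {vol_avg:.2f} mg/L")
```

    With these example values the depth average is 5.96 mg/L versus a volume average of about 7.08 mg/L, reproducing the direction of the difference reported in the study.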

  6. A model for light distribution and average solar irradiance inside outdoor tubular photobioreactors for the microalgal mass culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez, F.G.A.; Camacho, F.G.; Perez, J.A.S.

    1997-09-05

    A mathematical model to estimate the solar irradiance profile and average light intensity inside a tubular photobioreactor under outdoor conditions is proposed, requiring only geographic, geometric, and solar position parameters. First, the length of the path into the culture traveled by any direct or disperse ray of light was calculated as a function of three variables: day of year, solar hour, and geographic latitude. Then, the phenomenon of light attenuation by biomass was studied considering Lambert-Beer's law (considering only absorption) and the monodimensional model of Cornet et al. (1900) (considering absorption and scattering phenomena). Due to the existence of differential wavelength absorption, none of the literature models are useful for explaining light attenuation by the biomass. Therefore, an empirical hyperbolic expression is proposed. The equations to calculate light path length were substituted in the proposed hyperbolic expression, reproducing light intensity data obtained in the center of the loop tubes. The proposed model was also able to estimate the irradiance accurately at any point inside the culture. Calculation of the local intensity was thus extended to the full culture volume in order to obtain the average irradiance, showing how the higher biomass productivities in a Phaeodactylum tricornutum UTEX 640 outdoor chemostat culture could be maintained by delaying light limitation.
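
    The Lambert-Beer baseline the abstract compares against can be sketched as follows. The paper's own empirical hyperbolic expression is not reproduced in the abstract, so it is not shown here, and all numeric values below are hypothetical.

```python
import math

def lambert_beer(i0, ka, cb, path):
    """Lambert-Beer attenuation (absorption only, no scattering).

    i0:   incident irradiance (W/m^2)
    ka:   biomass absorption coefficient (m^2/kg)
    cb:   biomass concentration (kg/m^3)
    path: light path length into the culture (m)
    """
    return i0 * math.exp(-ka * cb * path)

# Hypothetical values: 1000 W/m^2 incident, ka = 100 m^2/kg,
# cb = 0.5 kg/m^3, 3 cm path to the center of the tube.
i_center = lambert_beer(1000.0, 100.0, 0.5, 0.03)
```

    In the paper's approach, `path` is itself a function of day of year, solar hour, and latitude; here it is simply supplied as a constant.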

  7. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    USGS Publications Warehouse

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, vs. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇⋅vs. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m−meq, where meq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m−meq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.

  8. A Phase Field Study of the Effect of Microstructure Grain Size Heterogeneity on Grain Growth

    NASA Astrophysics Data System (ADS)

    Crist, David J. D.

    Recent studies conducted with sharp-interface models suggest a link between the spatial distribution of grain size variance and average grain growth rate. This relationship and its effect on grain growth rate were examined using the diffuse-interface Phase Field Method on a series of microstructures with different degrees of grain size gradation. Results from this work indicate that the average grain growth rate has a positive correlation with the average grain size dispersion for phase field simulations, confirming previous observations. It is also shown that the grain growth rate in microstructures with skewed grain size distributions is better measured through the change in the volume-weighted average grain size than statistical mean grain size. This material is based upon work supported by the National Science Foundation under Grant No. 1334283. The NSF project title is "DMREF: Real Time Control of Grain Growth in Metals" and was awarded by the Civil, Mechanical and Manufacturing Innovation division under the Designing Materials to Revolutionize and Engineer our Future (DMREF) program.
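
    The distinction the study draws between the statistical mean and the volume-weighted average grain size can be sketched as follows; the grain diameters are hypothetical, and grain volume is approximated as d³ (sphere-like grains).

```python
# Skewed hypothetical grain size distribution: four small grains and one
# large grain (equivalent diameters, arbitrary units).
sizes = [1.0, 1.0, 1.0, 1.0, 5.0]

# Statistical mean grain size: every grain counts equally.
mean_size = sum(sizes) / len(sizes)

# Volume-weighted average: each grain weighted by its volume (~ d^3),
# so it tracks the large grains that dominate the microstructure volume.
volumes = [d ** 3 for d in sizes]
vol_weighted = sum(v * d for v, d in zip(volumes, sizes)) / sum(volumes)

print(f"mean grain size:            {mean_size:.2f}")
print(f"volume-weighted grain size: {vol_weighted:.2f}")
```

    With these values the mean is 1.8 while the volume-weighted average is about 4.88, showing why the two measures diverge for skewed distributions.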

  9. A thermomechanical constitutive model for cemented granular materials with quantifiable internal variables. Part I-Theory

    NASA Astrophysics Data System (ADS)

    Tengattini, Alessandro; Das, Arghya; Nguyen, Giang D.; Viggiani, Gioacchino; Hall, Stephen A.; Einav, Itai

    2014-10-01

    This is the first of two papers introducing a novel thermomechanical continuum constitutive model for cemented granular materials. Here, we establish the theoretical foundations of the model, and highlight its novelties. At the limit of no cement, the model is fully consistent with the original Breakage Mechanics model. An essential ingredient of the model is the use of measurable and micro-mechanics based internal variables, describing the evolution of the dominant inelastic processes. This imposes a link between the macroscopic mechanical behavior and the statistically averaged evolution of the microstructure. As a consequence this model requires only a few physically identifiable parameters, including those of the original breakage model and new ones describing the cement: its volume fraction, its critical damage energy and bulk stiffness, and the cohesion.

  10. Viscosity Prediction for Petroleum Fluids Using Free Volume Theory and PC-SAFT

    NASA Astrophysics Data System (ADS)

    Khoshnamvand, Younes; Assareh, Mehdi

    2018-04-01

    In this study, free volume theory (FVT) in combination with perturbed-chain statistical associating fluid theory (PC-SAFT) is implemented for viscosity prediction of petroleum reservoir fluids containing ill-defined components such as cuts and plus fractions. FVT has three adjustable parameters for each component to calculate viscosity. These three parameters are not available for petroleum cuts (especially plus fractions). In this work, these parameters are determined for different petroleum fractions. A model as a function of molecular weight and specific gravity is developed using 22 real reservoir fluid samples with API gravities in the range of 22 to 45. Afterward, the accuracy of the proposed model is compared with that of the model of De la Porte et al. with reference to experimental data. The presented model is used for six real samples in an evaluation step, and the results are compared with available experimental data and the method of De la Porte et al. Finally, the methods of Lohrenz et al. and Pedersen et al., two common industrial methods for viscosity calculation, are compared with the proposed approach. The absolute average deviation was 9.7 % for the free volume theory method, 15.4 % for Lohrenz et al., and 22.16 % for Pedersen et al.
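
    The absolute average deviation metric quoted above can be sketched as follows; the viscosity values are hypothetical illustrative numbers, not the study's data.

```python
def absolute_average_deviation(predicted, measured):
    """Absolute average deviation in percent:
    AAD% = (100 / N) * sum(|pred_i - meas_i| / meas_i)."""
    assert len(predicted) == len(measured) and len(measured) > 0
    n = len(measured)
    return 100.0 * sum(abs(p - m) / m for p, m in zip(predicted, measured)) / n

# Hypothetical measured vs predicted viscosities (mPa·s):
meas = [0.50, 1.20, 3.40]
pred = [0.55, 1.10, 3.10]
aad = absolute_average_deviation(pred, meas)
print(f"AAD = {aad:.1f} %")
```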

  11. Acoustically accessible window determination for ultrasound mediated treatment of glycogen storage disease type Ia patients

    NASA Astrophysics Data System (ADS)

    Wang, Shutao; Raju, Balasundar I.; Leyvi, Evgeniy; Weinstein, David A.; Seip, Ralf

    2012-10-01

    Glycogen storage disease type Ia (GSDIa) is caused by an inherited single-gene defect resulting in an impaired glycogen to glucose conversion pathway. Targeted ultrasound mediated delivery (USMD) of plasmid DNA (pDNA) to liver in conjunction with microbubbles may provide a potential treatment for GSDIa patients. As the success of USMD treatments is largely dependent on the accessibility of the targeted tissue by the focused ultrasound beam, this study presents a quantitative approach to determine the acoustically accessible liver volume in GSDIa patients. Models of focused ultrasound beam profiles for transducers of varying aperture and focal lengths were applied to abdomen models reconstructed from suitable CT and MRI images. Transducer manipulations (simulating USMD treatment procedures) were implemented via transducer translations and rotations with the intent of targeting and exposing the entire liver to ultrasound. Results indicate that acoustically accessible liver volumes can be as large as 50% of the entire liver volume for GSDIa patients and are on average 3 times larger than in a healthy adult group due to GSDIa patients' increased liver size. Detailed descriptions of the evaluation algorithm and the transducer and abdomen models are presented, together with implications for USMD treatments of GSDIa patients and transducer designs for USMD applications.

  12. Physicians in private practice: reasons for being a social franchise member.

    PubMed

    Huntington, Dale; Mundy, Gary; Hom, Nang Mo; Li, Qingfeng; Aung, Tin

    2012-08-01

    Evidence is emerging on the cost-effectiveness, quality and health coverage of social franchises. But little is known about the motivations of providers to join or remain within a social franchise network, or the impact that franchise membership has on client volumes or revenue earnings. (i) An uncontrolled facility-based survey of a random sample of 230 franchise members to assess self-reported motivations; (ii) a 24-month prospective cohort study of 3 cohorts of physicians who had been in the franchise for 4 years, for 2 years, or were new members, to track monthly case load and revenue generated. The most common reasons for joining the franchise were access to high quality and cheap drugs (96.1%) and feelings of social responsibility (95.2%). The effects of joining the franchise on the volume of family planning services are shown in the 2009 cohort, where the average monthly service volume increased from 18.5 per physician to 70.6 per physician during their first 2 years in the franchise (p<0.01). These gains are sustained during the 3rd and 4th year of franchise membership, as the 2007 cohort reported increases of monthly average family planning service volume from 71.2 per physician to 102.8 per physician (p<0.01). The net income of cohort 2009 increased significantly (p=0.024) during their first two years in the franchise. The results for cohorts 2007 and 2005 also show a generalized trend of increasing income. The findings show how franchise membership impacts the volume of franchise and non-franchised services. The increases in client volumes translated directly into increases in earnings among the franchise members, an unanticipated effect for providers who joined in order to better serve the poor. This finding has implications for the social franchise business model that relies upon subsidized medical products to reduce financial barriers for the poor. 
The increases in out-of-pocket payments for health care services that were not price controlled by the franchise are a concern. As the field of social franchises continues to mature its business models towards more sustainable, cost-recovery management practices, attention should be given towards avoiding commercialization of services.

  13. Physicians in private practice: reasons for being a social franchise member

    PubMed Central

    2012-01-01

    Background Evidence is emerging on the cost-effectiveness, quality and health coverage of social franchises. But little is known about the motivations of providers to join or remain within a social franchise network, or the impact that franchise membership has on client volumes or revenue earnings. Methods (i) An uncontrolled facility-based survey of a random sample of 230 franchise members to assess self-reported motivations; (ii) a 24-month prospective cohort study of 3 cohorts of physicians who had been in the franchise for 4 years, for 2 years, or were new members, to track monthly case load and revenue generated. Results The most common reasons for joining the franchise were access to high quality and cheap drugs (96.1%) and feelings of social responsibility (95.2%). The effects of joining the franchise on the volume of family planning services are shown in the 2009 cohort, where the average monthly service volume increased from 18.5 per physician to 70.6 per physician during their first 2 years in the franchise (p<0.01). These gains are sustained during the 3rd and 4th year of franchise membership, as the 2007 cohort reported increases of monthly average family planning service volume from 71.2 per physician to 102.8 per physician (p<0.01). The net income of cohort 2009 increased significantly (p=0.024) during their first two years in the franchise. The results for cohorts 2007 and 2005 also show a generalized trend of increasing income. Conclusions The findings show how franchise membership impacts the volume of franchise and non-franchised services. The increases in client volumes translated directly into increases in earnings among the franchise members, an unanticipated effect for providers who joined in order to better serve the poor. This finding has implications for the social franchise business model that relies upon subsidized medical products to reduce financial barriers for the poor. 
The increases in out-of-pocket payments for health care services that were not price controlled by the franchise are a concern. As the field of social franchises continues to mature its business models towards more sustainable, cost-recovery management practices, attention should be given towards avoiding commercialization of services. PMID:22849434

  14. Numerical Investigation of a Model Scramjet Combustor Using DDES

    NASA Astrophysics Data System (ADS)

    Shin, Junsu; Sung, Hong-Gye

    2017-04-01

    Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), a hybrid scheme combining a Reynolds-averaged Navier-Stokes scheme and a large eddy simulation. The three-dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed. This code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for spatial discretization. In addition, a 4th-order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German aerospace center, were considered. The profiles of the lower wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. Also, the mixing efficiency and total pressure recovery factor were provided in order to inspect the performance of the combustor.

  15. SU-F-T-253: Volumetric Comparison Between 4D CT Amplitude and Phase Binning Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, G; Ma, R; Reyngold, M

    2016-06-15

    Purpose: Motion artifact in 4DCT images can affect radiation treatment quality. To identify the most robust and accurate binning method, we compare the volume difference between targets delineated on amplitude and phase binned 4DCT scans. Methods: A Varian RPM system and CT scanner were used to acquire 4DCTs of a Quasar phantom with embedded cubic and spherical objects having superior-inferior motion. Eight patients’ respiration waveforms were used to drive the phantom. The 4DCT scan was reconstructed into 10 phase and 10 amplitude bins (2 mm slices). A scan of the static phantom was also acquired. For each waveform, sphere and cube volumes were generated automatically on each phase using HU thresholding. Phase (amplitude) ITVs were the union of object volumes over all phase (amplitude) binned images. The sphere and cube volumes measured in the static phantom scan were V_sphere = 4.19 cc and V_cube = 27.0 cc. Volume difference (VD) and dice similarity coefficient (DSC) of the ITVs, and mean volume error (MVE), defined as the average target volume percentage difference between each phase image and the static image, were used to evaluate the performance of amplitude and phase binning. Results: Averaged over the eight breathing traces, the VD and DSC of the internal target volume (ITV) between amplitude and phase binning were 3.4%±3.2% (mean ± std) and 95.9%±2.1% for the sphere, and 2.1%±3.3% and 98.0%±1.5% for the cube, respectively. For all waveforms, the average sphere MVE of amplitude and phase binning was 6.5%±5.0% and 8.2%±6.3%, respectively; and the average cube MVE of amplitude and phase binning was 5.7%±3.5% and 12.9%±8.9%, respectively. Conclusion: ITV volume and spatial overlap as assessed by VD and DSC are similar between amplitude and phase binning. Compared to phase binning, amplitude binning results in a lower MVE, suggesting it is less susceptible to motion artifact.
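
    The two overlap metrics used in this abstract can be sketched on voxel index sets. The volumes and coordinates below are hypothetical, not the phantom geometry from the study.

```python
def volume_difference(a, b):
    """Percent volume difference between two voxel sets, relative to b."""
    return 100.0 * abs(len(a) - len(b)) / len(b)

def dice_similarity(a, b):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two hypothetical ITVs as sets of (x, y, z) voxel indices: a 10x10x10
# block, and the same block shifted one voxel in z.
itv_amp = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
itv_pha = {(x, y, z) for x in range(10) for y in range(10) for z in range(1, 11)}

vd = volume_difference(itv_amp, itv_pha)   # equal volumes -> 0.0 %
dsc = dice_similarity(itv_amp, itv_pha)    # overlap is z = 1..9 -> 0.9
```

    Note that VD compares only total volumes, so two targets can have VD = 0 while still disagreeing spatially; DSC captures that spatial overlap.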

  16. 17 CFR 240.12h-6 - Certification by a foreign private issuer regarding the termination of registration of a class of...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... certifying to the Commission on Form 15F (17 CFR 249.324) that: (1) The foreign private issuer has had... average daily trading volume of the subject class of securities in the United States for a recent 12-month period has been no greater than 5 percent of the average daily trading volume of that class of securities...

  17. Agricultural Workers in Central California. Volume 1: In 1989; Volume 2: Phase II, 1990-91. California Agricultural Studies, 90-8 and 91-5.

    ERIC Educational Resources Information Center

    Alvarado, Andrew J.; And Others

    Two surveys developed profiles of seasonal agricultural workers and their working conditions in central California. In 1989, a random sample of 347 seasonal workers was interviewed. The sample was 30 percent female and 87 percent Mexican-born. Average age was 35 years and average educational attainment was 5.9 years. Most had parents, spouses, or…

  18. Edaravone, a free radical scavenger, attenuates cerebral infarction and hemorrhagic infarction in rats with hyperglycemia.

    PubMed

    Okamura, Koichi; Tsubokawa, Tamiji; Johshita, Hiroo; Miyazaki, Hiroshi; Shiokawa, Yoshiaki

    2014-01-01

    Thrombolysis for acute ischemic stroke is associated with the risk of hemorrhagic infarction, especially after reperfusion. Recent experimental studies suggest that the main mechanism contributing to hemorrhagic infarction is oxidative stress caused by disruption of the blood-brain barrier. Edaravone, a free radical scavenger, decreases oxidative stress, thereby preventing hemorrhagic infarction during ischemia and reperfusion. In this study, we investigated the effects of edaravone on hemorrhagic infarction in a rat model of hemorrhagic transformation. We used a previously established hemorrhagic transformation model of rats with hyperglycemia. Hyperglycemia was induced by intraperitoneal injection of glucose to all rats (n = 20). The rats with hyperglycemia showed a high incidence of hemorrhagic infarction. Middle cerebral artery occlusion (MCAO) for 1.5 hours followed by reperfusion for 24 hours was performed in edaravone-treated rats (n = 10) and control rats (n = 10). Upon completion of reperfusion, both groups were evaluated for infarct size and hemorrhage volume and the results obtained were compared. Edaravone significantly decreased infarct volume, with the average infarct volume in the edaravone-treated rats (227.6 mm(3)) being significantly lower than that in the control rats (264.0 mm(3)). Edaravone treatment also decreased the postischemic hemorrhage volumes (53.4 mm(3) in edaravone-treated rats vs 176.4 mm(3) in controls). In addition, the ratio of hemorrhage volume to infarct volume was lower in the edaravone-treated rats (23.5%) than in the untreated rats (63.2%). Edaravone attenuates cerebral infarction and hemorrhagic infarction in rats with hyperglycemia.

  19. Relation of neural structure to persistently low academic achievement: a longitudinal study of children with differing birth weights.

    PubMed

    Clark, Caron A C; Fang, Hua; Espy, Kimberly Andrews; Filipek, Pauline A; Juranek, Jenifer; Bangert, Barbara; Hack, Maureen; Taylor, H Gerry

    2013-05-01

    This study examined the relation of cerebral tissue reductions associated with VLBW to patterns of growth in core academic domains. Children born <750 g, 750 to 1,499 g, or >2,500 g completed measures of calculation, mathematical problem solving, and word decoding at time points spanning middle childhood and adolescence. K. A. Espy, H. Fang, D. Charak, N. M. Minich, and H. G. Taylor (2009, Growth mixture modeling of academic achievement in children of varying birth weight risk, Neuropsychology, Vol. 23, pp. 460-474) used growth mixture modeling to identify two growth trajectories (clusters) for each academic domain: an average achievement trajectory and a persistently low trajectory. In this study, 97 of the same participants underwent magnetic resonance imaging (MRI) in late adolescence, and cerebral tissue volumes were used to predict the probability of low growth cluster membership for each domain. Adjusting for whole brain volume (wbv), each 1-cm(3) reduction in caudate volume was associated with a 1.7- to 2.1-fold increase in the odds of low cluster membership for each domain. Each 1-mm(2) decrease in corpus callosum surface area increased these odds approximately 1.02-fold. Reduced cerebellar white matter volume was associated specifically with low calculation and decoding growth, and reduced cerebral white matter volume was associated with low calculation growth. Findings were similar when analyses were confined to the VLBW groups. Reduced volume of structures involved in connectivity, executive attention, and motor control may contribute to heterogeneous academic trajectories among children with VLBW.

  20. A multicomponent tracer field experiment to measure the flow volume, surface area, and rectilinear spacing of fractures away from the wellbore

    NASA Astrophysics Data System (ADS)

    Cathles, L. M.; Sanford, W. E.; Hawkins, A.; Li, Y. V.

    2017-12-01

    The nature of flow in fractured porous media is important to almost all subsurface processes including oil and gas recovery, contaminant transport and remediation, CO2 sequestration, and geothermal heat extraction. One would like to know, under flowing conditions, the flow volume, surface area, effective aperture, and rectilinear spacing of fractures in a representative volume of rock away from the well bore, but no methods currently allow acquisition of this data. It could, however, be collected by deploying inert tracers with a wide range of aqueous diffusion constants (e.g., from rapidly diffusing heat to non-diffusing nanoparticles) in the following fashion: The flow volume is defined by the heated volume measured by resistivity surveys. The fracture volume within this flow volume is indicated by the nanoparticle transit time. The average fracture spacing is indicated by the evolving thermal profile in the monitor and the production wells (measured by fiber optic cable), and by the retention of absorbing tracers. The average fracture aperture is determined by permeability measurements and the average fracture separation. We have proposed a field test to redundantly measure these fracture parameters in the fractured Dakota Sandstone where it approaches the surface in Ft Collins, Colorado. Five 30 m deep wells (an injection, production, and 3 monitor wells) cased to 20 m are proposed. The experiments will involve at least 9 different tracers. The planned field test and its potential significance will be described.

  1. 78 FR 35996 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-14

    ... liquidity; (5) add the Midpoint Match Volume Tier (``MPM Volume Tier'') to Footnote 3 of the Exchange's fee... add liquidity to the Exchange absent Members qualifying for additional volume tiered pricing. \\5... average daily volume (``ADV'') to EDGX, then the Member would get the current rate of $0.0001 per share...

  2. Flow and axial dispersion in a sinusoidal-walled tube: Effects of inertial and unsteady flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richmond, Marshall C.; Perkins, William A.; Scheibe, Timothy D.

    2013-12-01

    Dispersion in porous media flows has been the subject of much experimental, theoretical and numerical study. Here we consider a wavy-walled tube (a three-dimensional tube with sinusoidally-varying diameter) as a simplified conceptualization of flow in porous media, where constrictions represent pore throats and expansions pore bodies. A theoretical model for effective (macroscopic) longitudinal dispersion in this system has been developed by volume averaging the microscale velocity field. Direct numerical simulation using computational fluid dynamics (CFD) methods was used to compute velocity fields by solving the Navier-Stokes equations, and also to numerically solve the volume averaging closure problem, for a range of Reynolds numbers (Re) spanning the low-Re to inertial flow regimes, including one simulation at Re = 449 for which unsteady flow was observed. Dispersion values were computed using both the volume averaging solution and a random walk particle tracking method, and results of the two methods were shown to be consistent. Our results are compared to experimental measurements of dispersion in porous media and to previous theoretical results for the low-Re, Stokes flow regime. In the steady inertial regime we observe a power-law increase in effective longitudinal dispersion (DL) with Re, consistent with previous results. This rapid rate of increase is caused by trapping of solute in expansions due to flow separation (eddies). For the unsteady case (Re = 449), the rate of increase of DL with Re was smaller than that observed at lower Re. Velocity fluctuations in this regime lead to increased rates of solute mass transfer between the core flow and separated flow regions, thus diminishing the amount of tailing caused by solute trapping in eddies and thereby reducing longitudinal dispersion.

  3. Lava effusion rate definition and measurement: a review

    USGS Publications Warehouse

    Calvari, Sonia; Dehn, Jonathan; Harris, A.

    2007-01-01

    Measurement of effusion rate is a primary objective for studies that model lava flow and magma system dynamics, as well as for monitoring efforts during on-going eruptions. However, its exact definition remains a source of confusion, and problems occur when comparing volume flux values that are averaged over different time periods or spatial scales, or measured using different approaches. Thus our aims are to: (1) define effusion rate terminology; and (2) assess the various measurement methods and their results. We first distinguish between instantaneous effusion rate, and time-averaged discharge rate. Eruption rate is next defined as the total volume of lava emplaced since the beginning of the eruption divided by the time since the eruption began. The ultimate extension of this is mean output rate, this being the final volume of erupted lava divided by total eruption duration. Whether these values are total values, i.e. the flux feeding all flow units across the entire flow field, or local, i.e. the flux feeding a single active unit within a flow field across which many units are active, also needs to be specified. No approach is without its problems, and all can have large error (up to ∼50%). However, good agreement between diverse approaches shows that reliable estimates can be made if each approach is applied carefully and takes into account the caveats we detail here. There are three important factors to consider and state when measuring, giving or using an effusion rate. First, the time-period over which the value was averaged; second, whether the measurement applies to the entire active flow field, or a single lava flow within that field; and third, the measurement technique and its accompanying assumptions.
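
    The rate definitions distinguished in this review can be sketched with a hypothetical record of cumulative erupted lava volume; the numbers below are illustrative, not from any real eruption.

```python
# Hypothetical cumulative erupted volume record for an ongoing eruption.
times_h = [0.0, 2.0, 10.0, 48.0]        # hours since eruption onset
cum_vol = [0.0, 1.2e5, 4.0e5, 9.6e5]    # cumulative lava volume, m^3

# "Instantaneous" effusion rate, approximated by the slope over the most
# recent interval (strictly, a time-averaged discharge rate over that
# interval), in m^3/s:
inst_rate = (cum_vol[-1] - cum_vol[-2]) / ((times_h[-1] - times_h[-2]) * 3600)

# Eruption rate: total volume emplaced so far divided by time since the
# eruption began, in m^3/s:
eruption_rate = cum_vol[-1] / (times_h[-1] * 3600)

# Mean output rate uses the same formula as eruption_rate, but is computed
# only once from the final volume and total eruption duration.
print(f"recent-interval rate: {inst_rate:.2f} m^3/s")
print(f"eruption rate:        {eruption_rate:.2f} m^3/s")
```

    The example also shows the review's central caution: the two values differ (here roughly 4.1 vs 5.6 m³/s) purely because they average over different time periods, so the averaging window must always be stated alongside the rate.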

  4. Electrokinetic coupling in unsaturated porous media.

    PubMed

    Revil, A; Linde, N; Cerepi, A; Jougnot, D; Matthäi, S; Finsterle, S

    2007-09-01

    We consider a charged porous material that is saturated by two fluid phases that are immiscible and continuous on the scale of a representative elementary volume. The wetting phase for the grains is water and the nonwetting phase is assumed to be an electrically insulating viscous fluid. We use a volume-averaging approach to derive the linear constitutive equations for the electrical current density as well as the seepage velocities of the wetting and nonwetting phases on the scale of a representative elementary volume. These macroscopic constitutive equations are obtained by volume-averaging Ampère's law together with the Nernst-Planck equation and the Stokes equations. The material properties entering the macroscopic constitutive equations are explicitly described as functions of the saturation of the water phase, the electrical formation factor, and parameters that describe the capillary pressure function, the relative permeability functions, and the variation of electrical conductivity with saturation. New equations are derived for the streaming potential and electro-osmosis coupling coefficients. A primary drainage and imbibition experiment is simulated numerically to demonstrate that the relative streaming potential coupling coefficient depends not only on the water saturation, but also on the material properties of the sample, as well as the saturation history. We also compare the predicted streaming potential coupling coefficients with experimental data from four dolomite core samples. Measurements on these samples include electrical conductivity, capillary pressure, the streaming potential coupling coefficient at various levels of saturation, and the permeability at saturation of the rock samples. We found very good agreement between these experimental data and the model predictions.

  5. The effect of increase in dielectric values on specific absorption rate (SAR) in eye and head tissues following 900, 1800 and 2450 MHz radio frequency (RF) exposure

    NASA Astrophysics Data System (ADS)

    Keshvari, Jafar; Keshvari, Rahim; Lang, Sakari

    2006-03-01

    Numerous studies have attempted to address the question of the RF energy absorption difference between children and adults using computational methods. They have assumed the same dielectric parameters for child and adult head models in SAR calculations. This has been criticized by many researchers who have stated that child organs are not fully developed, their anatomy is different and also their tissue composition is slightly different with higher water content. Higher water content would affect dielectric values, which in turn would have an effect on RF energy absorption. The objective of this study was to investigate possible variation in specific absorption rate (SAR) in the head region of children and adults by applying the finite-difference time-domain (FDTD) method and using anatomically correct child and adult head models. In the calculations, the conductivity and permittivity of all tissues were increased from 5 to 20% but using otherwise the same exposure conditions. A half-wave dipole antenna was used as an exposure source to minimize the uncertainties of the positioning of a real mobile device and making the simulations easily replicable. Common mobile telephony frequencies of 900, 1800 and 2450 MHz were used in this study. The exposures of ear and eye regions were investigated. The SARs of models with increased dielectric values were compared to the SARs of the models where dielectric values were unchanged. The analyses suggest that increasing the value of dielectric parameters does not necessarily mean that volume-averaged SAR would increase. Under many exposure conditions, specifically at higher frequencies in eye exposure, volume-averaged SAR decreases. An increase of up to 20% in dielectric conductivity or both conductivity and permittivity always caused a SAR variation of less than 20%, usually about 5%, when it was averaged over 1, 5 or 10 g of cubic mass for all models. 
The thickness and composition of different tissue layers in the exposed regions within the human head play a more significant role in SAR variation compared to the variations (5-20%) of the tissue dielectric parameters.
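The mass-averaged SAR quantities discussed above (1, 5 or 10 g cubic averages) can be sketched as follows. This is a simplified, hedged illustration, not the full IEEE/IEC averaging algorithm: it grows a cube around a voxel until the target tissue mass is enclosed and ignores air/tissue boundary handling; the center is assumed to lie well inside the grid.

```python
import numpy as np

def local_sar(sigma, e_rms, rho):
    """Point SAR in W/kg: conductivity * |E|^2 / mass density."""
    return sigma * e_rms**2 / rho

def mass_averaged_sar(sar, rho, voxel_volume_m3, center, target_mass_kg):
    """Average SAR over an expanding cube centred on `center` until the
    enclosed tissue mass reaches target_mass_kg (e.g. 0.001 or 0.010 kg)."""
    z, y, x = center
    for r in range(1, min(sar.shape) // 2):
        sl = (slice(z - r, z + r + 1),
              slice(y - r, y + r + 1),
              slice(x - r, x + r + 1))
        mass = rho[sl].sum() * voxel_volume_m3
        if mass >= target_mass_kg:
            # absorbed power in the cube divided by its tissue mass
            power = (sar[sl] * rho[sl]).sum() * voxel_volume_m3
            return power / mass
    raise ValueError("cube grew beyond grid without reaching target mass")
```

Because the average is taken over a fixed mass rather than a point, raising the dielectric values can shrink penetration depth and redistribute absorption, which is why the volume-averaged SAR need not increase.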

  6. Unconfined laminar nanofluid flow and heat transfer around a rotating circular cylinder in the steady regime

    NASA Astrophysics Data System (ADS)

    Bouakkaz, Rafik; Salhi, Fouzi; Khelili, Yacine; Quazzazi, Mohamed; Talbi, Kamel

    2017-06-01

    In this work, steady flow-field and heat transfer through a copper-water nanofluid around a rotating circular cylinder with a constant nondimensional rotation rate α varying from 0 to 5 was investigated for Reynolds numbers of 5-40. Furthermore, the range of nanoparticle volume fractions considered is 0-5%. The effects of nanoparticle volume fraction on the fluid flow and heat transfer characteristics were investigated using a commercial finite-volume computational fluid dynamics solver. The variation of the local and average Nusselt numbers with Reynolds number, volume fraction, and rotation rate is presented for this range of conditions. The average Nusselt number is found to decrease with increasing rotation rate at a fixed Reynolds number and nanoparticle volume fraction. In addition, rotation can be used as a drag reduction technique.
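The abstract does not state which effective-property closures were used; the Maxwell conductivity model and the Brinkman viscosity correction are common choices for dilute copper-water nanofluids, sketched here as an assumption rather than the authors' method:

```python
def maxwell_conductivity(k_fluid, k_particle, phi):
    """Maxwell model for the effective thermal conductivity of a dilute
    suspension of spherical nanoparticles at volume fraction phi."""
    num = k_particle + 2.0 * k_fluid - 2.0 * phi * (k_fluid - k_particle)
    den = k_particle + 2.0 * k_fluid + phi * (k_fluid - k_particle)
    return k_fluid * num / den

def brinkman_viscosity(mu_fluid, phi):
    """Brinkman correction for the effective viscosity of a suspension."""
    return mu_fluid / (1.0 - phi) ** 2.5
```

Both effects pull in opposite directions on the Nusselt number: conductivity enhancement aids heat transfer while the viscosity increase thickens the boundary layer.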

  7. Correlation of Apollo oxygen tank thermodynamic performance predictions

    NASA Technical Reports Server (NTRS)

    Patterson, H. W.

    1971-01-01

    Parameters necessary to analyze the stratified performance of the Apollo oxygen tanks include g levels, tank elasticity, flow rates and pressurized volumes. Methods are described for estimating g levels and flow rates from flight plans prior to flight, and from guidance and system data for use in the post-flight analysis. Equilibrium thermodynamic equations are developed for the effects of tank elasticity and pressurized volumes on the tank pressure response, and their relative magnitudes are discussed. Correlations of tank pressures and heater temperatures from flight data with the results of a stratification model are shown. Heater temperatures were also estimated with empirical heat transfer correlations; agreement with flight data improved when fluid properties were averaged rather than evaluated at the mean film temperature.

  8. Decaying Lava Extrusion Rate at El Reventador Volcano, Ecuador, Measured Using High-Resolution Satellite Radar

    NASA Astrophysics Data System (ADS)

    Arnold, D. W. D.; Biggs, J.; Anderson, K.; Vallejo Vargas, S.; Wadge, G.; Ebmeier, S. K.; Naranjo, M. F.; Mothes, P.

    2017-12-01

    Lava extrusion at erupting volcanoes causes rapid changes in topography and morphology on the order of tens or even hundreds of meters. Satellite radar provides a method for measuring changes in topographic height over a given time period to an accuracy of meters, either by measuring the width of radar shadow cast by steep sided features, or by measuring the difference in radar phase between two sensors separated in space. We measure height changes, and hence estimate extruded lava volume flux, at El Reventador, Ecuador, between 2011 and 2016, using data from the RADARSAT-2 and TanDEM-X satellite missions. We find that 39 new lava flows were extruded between 9 February 2012 and 24 August 2016, with a cumulative volume of 44.8M m3 dense rock equivalent and a gradually decreasing eruption rate. The average dense rock rate of lava extrusion during this time is 0.31 ± 0.02 m3 s-1, which is similar to the long-term average from 1972 to 2016. Apart from a volumetrically small dyke opening event between 9 March and 10 June 2012, lava extrusion at El Reventador is not accompanied by any significant magmatic ground deformation. We use a simple physics-based model to estimate that the volume of the magma reservoir under El Reventador is greater than 3 km3. Our lava extrusion data can be equally well fit by models representing a closed reservoir depressurising during the eruption with no magma recharge, or an open reservoir with a time-constant magma recharge rate of up to 0.35 ± 0.01 m3 s-1.
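The reported long-term rate follows directly from the cumulative volume and the observation interval; a short check using the figures quoted in the abstract:

```python
from datetime import date

def mean_extrusion_rate(volume_m3, start_iso, end_iso):
    """Dense-rock-equivalent volume divided by elapsed time (m^3/s)."""
    elapsed = date.fromisoformat(end_iso) - date.fromisoformat(start_iso)
    return volume_m3 / elapsed.total_seconds()

# Figures from the abstract: 44.8 million m^3 DRE extruded between
# 9 February 2012 and 24 August 2016.
rate = mean_extrusion_rate(44.8e6, "2012-02-09", "2016-08-24")
# rate is approximately 0.31 m^3/s, matching the reported average.
```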

  9. Decaying lava extrusion rate at El Reventador Volcano, Ecuador measured using high-resolution satellite radar

    USGS Publications Warehouse

    Arnold, D. W. D.; Biggs, J.; Anderson, Kyle R.; Vallejo Vargas, S.; Wadge, G.; Ebmeier, S. K.; Naranjo, M. F.; Mothes, P.

    2017-01-01

    Lava extrusion at erupting volcanoes causes rapid changes in topography and morphology on the order of tens or even hundreds of meters. Satellite radar provides a method for measuring changes in topographic height over a given time period to an accuracy of meters, either by measuring the width of radar shadow cast by steep sided features, or by measuring the difference in radar phase between two sensors separated in space. We measure height changes, and hence estimate extruded lava volume flux, at El Reventador, Ecuador, between 2011 and 2016, using data from the RADARSAT-2 and TanDEM-X satellite missions. We find that 39 new lava flows were extruded between 9 February 2012 and 24 August 2016, with a cumulative volume of 44.8M m3 dense rock equivalent and a gradually decreasing eruption rate. The average dense rock rate of lava extrusion during this time is 0.31 ± 0.02 m3 s−1, which is similar to the long-term average from 1972 to 2016. Apart from a volumetrically small dyke opening event between 9 March and 10 June 2012, lava extrusion at El Reventador is not accompanied by any significant magmatic ground deformation. We use a simple physics-based model to estimate that the volume of the magma reservoir under El Reventador is greater than 3 km3. Our lava extrusion data can be equally well fit by models representing a closed reservoir depressurising during the eruption with no magma recharge, or an open reservoir with a time-constant magma recharge rate of up to 0.35 ± 0.01 m3 s−1.

  10. Should epidural drain be recommended after supratentorial craniotomy for epileptic patients?

    PubMed

    Guangming, Zhang; Huancong, Zuo; Wenjing, Zhou; Guoqiang, Chen; Xiaosong, Wang

    2009-08-01

    Epidural drainage (ED) was once, and still is, commonly applied mainly to prevent epidural hematoma (EH) and subgaleal cerebrospinal fluid (CSF) collection. We designed this study to observe whether ED could decrease the incidence and volume of EH and subgaleal CSF collection after supratentorial craniotomy in epileptic patients. Three hundred forty-two epileptic patients were divided into 2 groups according to their first craniotomy date (group 1 on odd dates and group 2 on even dates). Patients in group 1 had ED and those in group 2 had no ED. The patient numbers and volumes of EH and subgaleal CSF collections in both groups were recorded and statistically analyzed. There were 22 EHs in group 1 and 20 EHs in group 2. There were 11 and 10 subgaleal CSF collections in groups 1 and 2, respectively. The average volume of EH was 13.5 +/- 8.12 and 14.65 +/- 7.72 mL in groups 1 and 2, respectively. The average volume of subgaleal CSF collection was 42.76 +/- 12.09 and 43.75 +/- 11.44 mL in groups 1 and 2, respectively. There were no statistical differences in the incidence and average volume of EH and subgaleal CSF collection between the 2 groups. ED cannot decrease the incidence and volume of EH and subgaleal CSF collection, and should not be recommended after supratentorial epileptic craniotomy.
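The abstract does not name the statistical test used; a Welch's t statistic computed from the reported summary values (an assumption, for illustration) is consistent with the stated lack of significance:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent groups summarised by
    mean, standard deviation, and sample size."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# EH volumes from the abstract: 13.5 +/- 8.12 mL (22 EHs, with drain)
# versus 14.65 +/- 7.72 mL (20 EHs, without drain).
t = welch_t(13.5, 8.12, 22, 14.65, 7.72, 20)
# |t| is well below ~2, consistent with the reported absence of a
# statistically significant difference.
```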

  11. The evaluation of tissue mass loss in the incision line of prostate with benign hyperplasia performed using holmium laser and cutting electrode.

    PubMed

    Szewczyk, Mariusz; Jesionek-Kupnicka, Dorota; Lipiński, Marek Ireneusz; Lipinski, Piotr; Różański, Waldemar

    2014-01-01

    The aim of this study is to compare the changes in the incision line of prostatic adenoma using a monopolar cutting electrode and holmium laser, as well as the assessment of associated tissue mass and volume loss of benign prostatic hyperplasia (BPH). The material used in this study consisted of 74 preparations of prostatic adenoma obtained via open retropubic adenomectomy, with an average volume of 120.7 ml. The material obtained cut in vitro before fixation in formaldehyde. One lobe was cut using holmium laser, the other using a monopolar cutting electrode. After the incision was made, tissue mass and volume loss were evaluated. Thermocoagulation changes in the incision line were examinedunder light microscope. In the case of the holmium laser incision, the average tissue mass loss was 1.73 g, tissue volume loss 3.57 ml and the depth of thermocoagulation was 1.17 mm. When the monopolar cutting electrode was used average tissue mass loss was 0.807 g, tissue volume loss 2.48 ml and the depth of thermocoagulation was 0.19 mm. Where holmium laser was used, it was observed that the layer of tissue with thermocoagulation changes was deeper than in the case of the monopolar cutting electrode. Moreover, it was noticed that holmium laser caused bigger tissue mass and volume loss than the cutting electrode.

  12. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption

    PubMed Central

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Miklavcic, Damijan

    2016-01-01

    Background: Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Material and methods: Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated against in vivo experimental data consisting of MRI scans of rat brains after electroporation treatments. Results: Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r2 = 0.79, p < 0.008; r2 = 0.91, p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete-cell-death and no-cell-death thresholds was relatively narrow (between 0.88 and 0.91) even for small numbers of pulses and depended weakly on the number of pulses. For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE radii. Conclusions: The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup. PMID:27069447
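The Peleg-Fermi model named above is a sigmoid in the electric field whose parameters decay with pulse number, which produces exactly the plateau behaviour the abstract describes. A hedged, generic parameterisation (the paper's exact constants are not given here):

```python
import math

def peleg_fermi_survival(e_field, n_pulses, ec0, k1, a0, k2):
    """Peleg-Fermi survival fraction:

        S(E, n) = 1 / (1 + exp((E - Ec(n)) / A(n)))

    with the critical field Ec and kinetic constant A both decaying
    exponentially with pulse number n (an assumed, common form)."""
    ec = ec0 * math.exp(-k1 * n_pulses)
    a = a0 * math.exp(-k2 * n_pulses)
    return 1.0 / (1.0 + math.exp((e_field - ec) / a))
```

Because Ec(n) decays exponentially, adding pulses lowers the kill threshold quickly at first and then hardly at all, matching the plateau effect reported in the results.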

  13. Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.

    1975-01-01

    A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.

  14. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer

    NASA Astrophysics Data System (ADS)

    Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-01

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.
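The Dice similarity coefficient used to score these segmentations is a standard overlap measure, DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch for binary masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2 |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 85 ± 6% as reported for the shape deformation method means the automated and manual cervix-uterus volumes overlap substantially relative to their combined size.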

  15. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer.

    PubMed

    Bondar, M Luiza; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-07

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.

  16. Preface

    NASA Astrophysics Data System (ADS)

    Faybishenko, Boris; Witherspoon, Paul A.; Gale, John

    How to characterize fluid flow, heat, and chemical transport in geologic media remains a central challenge for geoscientists and engineers worldwide. Investigations of fluid flow and transport within rock relate to such fundamental and applied problems as environmental remediation; nonaqueous phase liquid (NAPL) transport; exploitation of oil, gas, and geothermal resources; disposal of spent nuclear fuel; and geotechnical engineering. It is widely acknowledged that fractures in unsaturated-saturated rock can play a major role in solute transport from the land surface to underlying aquifers. It is also evident that general issues concerning flow and transport predictions in subsurface fractured zones can be resolved in a practical manner by integrating investigations into the physical nature of flow in fractures, developing relevant mathematical models and modeling approaches, and collecting site characterization data. Because of the complexity of flow and transport processes in most fractured rock flow problems, it is not yet possible to develop models directly from first principles. One reason for this is the presence of episodic, preferential water seepage and solute transport, which usually proceed more rapidly than expected from volume-averaged and time-averaged models. However, the physics of these processes is still not fully understood.

  17. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. 
Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$ 44 000, US$ 136 000, and US$ 268 000, respectively. The maximum risk to the population varies from 2.1 × 10-1 yr-1 for one or more lives lost to 6.0 × 10-2 yr-1 for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.

  18. Trends in Arctic Sea Ice Volume 2010-2013 from CryoSat-2

    NASA Astrophysics Data System (ADS)

    Tilling, R.; Ridout, A.; Wingham, D.; Shepherd, A.; Haas, C.; Farrell, S. L.; Schweiger, A. J.; Zhang, J.; Giles, K.; Laxon, S.

    2013-12-01

    Satellite records show a decline in Arctic sea ice extent over the past three decades with a record minimum in September 2012, and results from the Pan-Arctic Ice-Ocean Modelling and Assimilation System (PIOMAS) suggest that this has been accompanied by a reduction in volume. We use three years of measurements recorded by the European Space Agency CryoSat-2 (CS-2) mission, validated with in situ data, to generate estimates of seasonal variations and inter-annual trends in Arctic sea ice volume between 2010 and 2013. The CS-2 estimates of sea ice thickness agree with in situ estimates derived from upward looking sonar measurements of ice draught and airborne measurements of ice thickness and freeboard to within 0.1 metres. Prior to the record minimum in summer 2012, autumn and winter Arctic sea ice volume had fallen by ~1300 km3 relative to the previous year. Using the full 3-year period of CS-2 observations, we estimate that winter Arctic sea ice volume has decreased by ~700 km3/yr since 2010, approximately twice the average rate since 1980 as predicted by the PIOMAS.
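The quoted trend of ~700 km3/yr is an ordinary least-squares slope of volume against time; a sketch with hypothetical winter volumes chosen to illustrate that rate (the actual CS-2 values are not reproduced here):

```python
def linear_trend(years, volumes_km3):
    """Ordinary least-squares slope of volume against time (km^3/yr)."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(volumes_km3) / n
    num = sum((y - mean_y) * (v - mean_v)
              for y, v in zip(years, volumes_km3))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Hypothetical winter volumes illustrating a -700 km^3/yr decline:
trend = linear_trend([2010, 2011, 2012, 2013],
                     [16100, 15400, 14700, 14000])
# trend == -700.0
```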

  19. Model-based segmentation of abdominal aortic aneurysms in CTA images

    NASA Astrophysics Data System (ADS)

    de Bruijne, Marleen; van Ginneken, Bram; Niessen, Wiro J.; Loog, Marco; Viergever, Max A.

    2003-05-01

    Segmentation of thrombus in abdominal aortic aneurysms is complicated by regions of low boundary contrast and by the presence of many neighboring structures in close proximity to the aneurysm wall. We present an automated method that is similar to the well known Active Shape Models (ASM), combining a three-dimensional shape model with a one-dimensional boundary appearance model. Our contribution is twofold: we developed a non-parametric appearance modeling scheme that effectively deals with a highly varying background, and we propose a way of generalizing models of curvilinear structures from small training sets. In contrast with the conventional ASM approach, the new appearance model trains on both true and false examples of boundary profiles. The probability that a given image profile belongs to the boundary is obtained using k nearest neighbor (kNN) probability density estimation. The performance of this scheme is compared to that of original ASMs, which minimize the Mahalanobis distance to the average true profile in the training set. The generalizability of the shape model is improved by modeling the object's axis deformation independently of its cross-sectional deformation. A leave-one-out experiment was performed on 23 datasets. Segmentation using the kNN appearance model significantly outperformed the original ASM scheme; average volume errors were 5.9% and 46%, respectively.
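The kNN boundary-probability idea above reduces to a simple fraction: among the k training profiles nearest to a candidate profile, how many are true boundary examples. A minimal sketch (profile features and labels are illustrative, not the paper's):

```python
import math

def knn_boundary_probability(profile, training_profiles, training_labels, k=5):
    """kNN estimate of the probability that an intensity profile lies on
    the object boundary: the fraction of the k nearest training profiles
    (Euclidean distance) labelled as true boundary examples."""
    dists = sorted(
        (math.dist(profile, p), label)
        for p, label in zip(training_profiles, training_labels)
    )
    return sum(label for _, label in dists[:k]) / k
```

Training on false as well as true profiles is what distinguishes this from the Mahalanobis-distance scheme, which models only the true-boundary class.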

  20. Numerical Modeling of Nanocellular Foams Using Classical Nucleation Theory and Influence Volume Approach

    NASA Astrophysics Data System (ADS)

    Khan, Irfan; Costeux, Stephane; Bunker, Shana; Moore, Jonathan; Kar, Kishore

    2012-11-01

    Nanocellular porous materials present unusual optical, dielectric, thermal and mechanical properties and are thus envisioned to find use in a variety of applications. Thermoplastic polymeric foams show considerable promise in achieving these properties. However, there are still considerable challenges in achieving nanocellular foams with densities as low as conventional foams. A lack of in-depth understanding of the effect of process parameters and physical properties on the foaming process is a major obstacle. A numerical model has been developed to simulate the simultaneous nucleation and bubble growth during depressurization of thermoplastic polymers saturated with supercritical blowing agents. The model is based on the popular ``Influence Volume Approach,'' which assumes that a growing boundary layer depleted of blowing agent surrounds each bubble. Classical nucleation theory is used to predict the rate of nucleation of bubbles. By solving the mass balance, momentum balance and species conservation equations for each bubble, the model is capable of predicting average bubble size, bubble size distribution and bulk porosity. The model is modified to include mechanisms for Joule-Thomson cooling during depressurization and secondary foaming. Simulation results for polymers with and without nucleating agents will be discussed and compared with experimental data.
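The classical nucleation theory rate used in such models has the standard Arrhenius form with an energy barrier set by surface tension and supersaturation pressure; a hedged sketch (the kinetic prefactor J0 and parameter values are assumptions, not the paper's):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_nucleation_rate(j0, surface_tension, delta_p, temperature):
    """Classical nucleation theory rate for homogeneous bubble nucleation:

        J = J0 * exp(-16*pi*sigma^3 / (3*kB*T*dP^2))

    where sigma is surface tension (N/m) and dP the supersaturation
    pressure difference (Pa)."""
    barrier = 16.0 * math.pi * surface_tension**3 / (3.0 * delta_p**2)
    return j0 * math.exp(-barrier / (K_B * temperature))
```

The cubic dependence on sigma and inverse-square dependence on dP make the rate extremely sensitive to depressurization conditions, which is why process parameters dominate the achievable cell density.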

  1. A Hele-Shaw-Cahn-Hilliard Model for Incompressible Two-Phase Flows with Different Densities

    NASA Astrophysics Data System (ADS)

    Dedè, Luca; Garcke, Harald; Lam, Kei Fong

    2017-07-01

    Topology changes in multi-phase fluid flows are difficult to model within a traditional sharp interface theory. Diffuse interface models turn out to be an attractive alternative to model two-phase flows. Based on a Cahn-Hilliard-Navier-Stokes model introduced by Abels et al. (Math Models Methods Appl Sci 22(3):1150013, 2012), which uses a volume-averaged velocity, we derive a diffuse interface model in a Hele-Shaw geometry, which in the case of non-matched densities, simplifies an earlier model of Lee et al. (Phys Fluids 14(2):514-545, 2002). We recover the classical Hele-Shaw model as a sharp interface limit of the diffuse interface model. Furthermore, we show the existence of weak solutions and present several numerical computations including situations with rising bubbles and fingering instabilities.
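A representative form of such a diffuse-interface Hele-Shaw system (notation assumed here for illustration, not taken verbatim from the paper) couples a Darcy-type law for the volume-averaged velocity to a convective Cahn-Hilliard equation:

```latex
% Darcy-type momentum balance with capillary forcing (Hele-Shaw limit)
\mathbf{v} = -\frac{K}{\eta(\varphi)}\left(\nabla p - \mu \nabla \varphi\right),
\qquad \nabla \cdot \mathbf{v} = 0,
% convective Cahn--Hilliard equation for the order parameter
\partial_t \varphi + \mathbf{v} \cdot \nabla \varphi
  = \nabla \cdot \bigl( m(\varphi) \nabla \mu \bigr),
\qquad
\mu = -\varepsilon \, \Delta \varphi + \varepsilon^{-1} \psi'(\varphi).
```

Here φ is the phase field, μ the chemical potential, m(φ) a mobility, ψ a double-well potential, and η(φ) the phase-dependent viscosity; the sharp interface limit ε → 0 recovers the classical Hele-Shaw model, as the abstract states.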

  2. The influence of gravity on the formation of amyloplasts in columella cells of Zea mays L

    NASA Technical Reports Server (NTRS)

    Moore, R.; Fondren, W. M.; Koon, E. C.; Wang, C. L.

    1986-01-01

    Columella (i.e., putative graviperceptive) cells of Zea mays seedlings grown in the microgravity of outer space allocate significantly less volume to putative statoliths (amyloplasts) than do columella cells of Earth-grown seedlings. Amyloplasts of flight-grown seedlings are significantly smaller than those of ground controls, as is the average volume of individual starch grains. Similarly, the relative volume of starch in amyloplasts in columella cells of flight-grown seedlings is significantly less than that of Earth-grown seedlings. Microgravity does not significantly alter the volume of columella cells, the average number of amyloplasts per columella cell, or the number of starch grains per amyloplast. These results are discussed relative to the influence of gravity on cellular and organellar structure.

  3. Proteins QSAR with Markov average electrostatic potentials.

    PubMed

    González-Díaz, Humberto; Uriarte, Eugenio

    2005-11-15

    Classic physicochemical and topological indices have been widely used in small-molecule QSAR but less so in protein QSAR. In this study, a Markov model is used to calculate, for the first time, average electrostatic potentials xik for the indirect interaction between amino acids placed at topological distance k within a given protein backbone. The short-term average stochastic potential xi1 for 53 Arc repressor mutants was used to model the effect of alanine scanning on thermal stability. The Arc repressor is a model protein of relevance for biochemical studies in bioorganic and medicinal chemistry. A linear discriminant analysis model correctly classified 43 out of 53 (81.1%) proteins according to their thermal stability. More specifically, the model classified 20/28 (71.4%) of proteins with near wild-type stability and 23/25 (92.0%) of proteins with reduced stability. Moreover, predictability in cross-validation procedures was 81.0%. Expansion of the electrostatic potential in the series xi0, xi1, xi2, and xi3 justified the use of the abrupt truncation approach, the overall accuracy being >70.0% for xi0 and equal for xi1, xi2, and xi3. The xi1 model compared favorably with others based on D-Fire potential, surface area, volume, partition coefficient, and molar refractivity, which achieved less than 77.0% accuracy [Ramos de Armas, R.; González-Díaz, H.; Molina, R.; Uriarte, E. Protein Struct. Func. Bioinf. 2004, 56, 715]. The xi1 model also has a more tractable interpretation than others based on Markovian negentropies and stochastic moments. Finally, the model is notably simpler than the two models based on quadratic and linear indices reported by Marrero-Ponce et al., which use four to five times more descriptors. Introducing average stochastic potentials may thus be useful for QSAR applications, since xik has an amenable physical interpretation and is very effective.

  4. U.S. Army War College Guide to National Security Issues. Volume 2. National Security Policy and Strategy

    DTIC Science & Technology

    2012-06-01

1998 National War College paper entitled “U.S. National Security Structure: A New Model for the 21st Century” defines the national security community ...fueled by revolutions in communications and information management, the emergence of a truly global market and world economy, the primacy of economic...

  5. Regional Myocardial Blood Volume and Flow: First-Pass MR Imaging with Polylysine-Gd-DTPA

    PubMed Central

    Wilke, Norbert; Kroll, Keith; Merkle, Hellmut; Wang, Ying; Ishibashi, Yukata; Xu, Ya; Zhang, Jiani; Jerosch-Herold, Michael; Mühler, Andreas; Stillman, Arthur E.; Bassingthwaighte, James B.; Bache, Robert; Ugurbil, Kamil

    2010-01-01

The authors investigated the utility of an intravascular magnetic resonance (MR) contrast agent, poly-L-lysine-gadolinium diethylenetriaminepentaacetic acid (DTPA), for differentiating acutely ischemic from normally perfused myocardium with first-pass MR imaging. Hypoperfused regions, identified with microspheres, on the first-pass images displayed significantly decreased signal intensities compared with normally perfused myocardium (P < .0007). Estimates of regional myocardial blood content, obtained by measuring the ratio of areas under the signal intensity-versus-time curves in tissue regions and the left ventricular chamber, averaged 0.12 mL/g ± 0.04 (n = 35), compared with a value of 0.11 mL/g ± 0.05 measured with radiolabeled albumin in the same tissue regions. To obtain MR estimates of regional myocardial blood flow, in situ calibration curves were used to transform first-pass intensity-time curves into content-time curves for analysis with a multiple-pathway, axially distributed model. Flow estimates, obtained by automated parameter optimization, averaged 1.2 mL/min/g ± 0.5 (n = 29), compared with 1.3 mL/min/g ± 0.3 obtained with tracer microspheres in the same tissue specimens at the same time. The results represent a combination of T1-weighted first-pass imaging, intravascular relaxation agents, and a spatially distributed perfusion model to obtain absolute regional myocardial blood flow and volume. PMID:7766986
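The blood-content estimate described above is a ratio of areas under two signal-intensity-versus-time curves. A minimal sketch of that ratio, using hypothetical Gaussian first-pass curves (the curve shapes, timings, and amplitudes are illustrative assumptions, not the study's data):

```python
import numpy as np

def auc(y, t):
    """Trapezoidal area under a sampled curve."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def blood_content(tissue_si, lv_si, t):
    """Regional blood content as the ratio of areas under the tissue
    and left-ventricular signal-intensity-versus-time curves."""
    return auc(tissue_si, t) / auc(lv_si, t)

# Hypothetical first-pass curves (arbitrary signal units, time in seconds)
t = np.linspace(0.0, 30.0, 61)
lv = 100.0 * np.exp(-((t - 10.0) / 4.0) ** 2)      # left-ventricular bolus
tissue = 12.0 * np.exp(-((t - 12.0) / 5.0) ** 2)   # myocardial region

ratio = blood_content(tissue, lv, t)  # dimensionless blood-content estimate
```

With these illustrative curves the ratio lands near 0.15, the same order as the 0.12 mL/g content reported above.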

  6. Effect of corneal inhomogeneity on the mechanical behavior of the eye

    NASA Astrophysics Data System (ADS)

    Stein, A. A.; Moiseeva, I. N.

    2018-05-01

The effect of spatial inhomogeneity of the effective cornea stiffness distribution on the mechanical properties of the eye is investigated on the basis of the two-component model of the eyeball, in which the cornea is represented by a momentless deformable, linearly elastic surface and the scleral region by an elastic element that responds to changes in intraocular pressure by changes in volume. The approach used makes it possible to consider within the same model both the natural corneal inhomogeneity and the mechanical consequences of local cornea weakening owing to surgical procedures. The dependences of parameters that characterize the deformation properties of both the cornea (apex displacement) and the eyeball as a whole (change in intraocular volume) on changes in intraocular pressure are obtained. For moderate inhomogeneity they differ only slightly from the same dependences for a homogeneous cornea with effective stiffness equal to the average value of the corresponding inhomogeneous distribution. However, if the effective stiffness amplitude is very high, corneal inhomogeneity discernibly affects the integral response of the cornea and the eyeball as a whole to changes in pressure. The effect of inhomogeneity on the data of tonometry also mainly depends on the average effective corneal stiffness. The difference between the tonometric and true pressures increases with surgical cornea weakening in the apical region for both Schiøtz and Maklakoff tonometers.

  7. Surface transport processes in charged porous media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2017-03-03

Surface transport processes are important in chemistry, colloidal sciences, engineering, biology, and geophysics. Natural or externally produced charges on surfaces create electrical double layers (EDLs) at the solid-liquid interface. The existence of the EDLs produces several complex processes, including bulk and surface transport of ions. In this work, a model is presented to simulate bulk and surface transport processes in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Description of the EDL between the electrolyte solution and the charged wall is accomplished using the Gouy-Chapman-Stern (GCS) model. The surface transport terms enter the averaged equations through the use of boundary conditions for diffuse interfaces. Two extra surface transport terms appear in the closed averaged equations. One is a surface diffusion term equivalent to the transport process in non-charged porous media. The second surface transport term is a migration term unique to charged porous media. The effective bulk and transport parameters for isotropic porous media are calculated by solving the corresponding closure problems.
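The Gouy-Chapman-Stern description invoked above rests on the Debye screening length, and in the low-potential (Debye-Hückel) limit the diffuse-layer potential decays roughly as exp(-x/λD). A sketch with standard physical constants; the 10 mM concentration and the simple exponential profile are illustrative simplifications, not the paper's closure model:

```python
import math

def debye_length(c0_mol_per_m3, z=1, eps_r=78.5, T=298.15):
    """Debye length of a symmetric z:z electrolyte in water (SI units)."""
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    kB = 1.380649e-23         # Boltzmann constant, J/K
    e = 1.602176634e-19       # elementary charge, C
    NA = 6.02214076e23        # Avogadro constant, 1/mol
    n0 = c0_mol_per_m3 * NA   # number density of each ion species, 1/m^3
    return math.sqrt(eps_r * eps0 * kB * T / (2.0 * n0 * (z * e) ** 2))

def diffuse_potential(x, phi_d, lam):
    """Low-potential diffuse-layer profile: phi(x) = phi_d * exp(-x / lambda_D)."""
    return phi_d * math.exp(-x / lam)

lam = debye_length(10.0)  # 10 mol/m^3 (10 mM) 1:1 electrolyte, roughly 3 nm
```

The "thin EDL" limit used in the averaging above corresponds to this length being much smaller than the pore radius.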

  9. Health policies for the reduction of obstetric interventions in singleton full-term births in Catalonia.

    PubMed

    Pueyo, Maria-Jesus; Escuriet, Ramon; Pérez-Botella, M; de Molina, I; Ruíz-Berdun, D; Albert, S; Díaz, S; Torres-Capcha, P; Ortún, V

    2018-04-01

To explore the effect of hospital characteristics on the proportion of obstetric interventions (OI) performed in singleton full-term births (SFTB) in Catalonia (2010-2014), while incentives were employed to reduce C-sections. Data about SFTB attended at 42 public hospitals were extracted from the dataset of hospital discharges. Hospitals were classified according to the level of complexity, the volume of births attended, and the adoption of a non-medicalized delivery (NMD) strategy. The annual average change in the percentage of OI was calculated based on Poisson regression models. The rate of OI (35% of all SFTB), including C-sections (20.6%), remained stable through the period. Hospitals attending less complex cases had a lower average of OI, while hospitals attending lower volumes had the highest average. Higher levels of complexity increased the use of C-sections (+4% yearly) and forceps (+16%). The adoption of the NMD strategy decreased the rate of C-sections. The proportion of OI, including C-sections, remained stable in spite of public incentives to reduce them. The adoption of the NMD strategy could help in decreasing the rate of OI. To reduce the OI rate, new strategies should be launched, such as the development of low-risk pregnancy units, alignment of incentives with hospital payment, increased value of incentives, and encouragement of a cultural shift towards non-medicalized births. Copyright © 2018 Elsevier B.V. All rights reserved.
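The "annual average change" above comes from Poisson regression on yearly counts: with log(E[count]) = a + b·year, the average annual change is exp(b) − 1. A numpy-only sketch on synthetic counts (the data and the +4% trend are made up, and the fitting code is a generic Poisson GLM Newton iteration, not the study's software):

```python
import numpy as np

def poisson_annual_change(years, counts, n_iter=25):
    """Fit log(E[count]) = a + b*(year - year0) by Newton's method
    (Poisson GLM, log link) and return exp(b) - 1, the average
    annual fractional change."""
    years = np.asarray(years, dtype=float)
    counts = np.asarray(counts, dtype=float)
    X = np.column_stack([np.ones_like(years), years - years[0]])
    # warm start from ordinary least squares on log counts (counts > 0)
    beta = np.linalg.lstsq(X, np.log(counts), rcond=None)[0]
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        H = X.T @ (X * mu[:, None])    # Fisher information X' W X, W = diag(mu)
        g = X.T @ (counts - mu)        # score vector
        beta = beta + np.linalg.solve(H, g)
    return np.exp(beta[1]) - 1.0

years = np.arange(2010, 2015)
counts = 500.0 * 1.04 ** (years - 2010)  # synthetic counts rising 4% per year
annual_change = poisson_annual_change(years, counts)
```

On this exact exponential series the fit recovers the +4% annual change; real count data would add sampling noise around it.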

  10. Time series trends of the safety effects of pavement resurfacing.

    PubMed

    Park, Juneyoung; Abdel-Aty, Mohamed; Wang, Jung-Han

    2017-04-01

This study evaluated the safety performance of pavement resurfacing projects on urban arterials in Florida using observational before-and-after approaches. The safety effects of pavement resurfacing were quantified as crash modification factors (CMFs) and estimated for different ranges of heavy-vehicle traffic volume and time changes for different severity levels. In order to evaluate the variation of CMFs over time, crash modification functions (CMFunctions) were developed using nonlinear regression and time series models. The results showed that pavement resurfacing projects decrease crash frequency and are more effective at reducing severe crashes in general. Moreover, the results of the general relationship between the safety effects and time indicated that the CMFs increase over time after the resurfacing treatment. It was also found that pavement resurfacing projects are more safety effective on urban roadways with a higher heavy-vehicle volume rate than on roadways with a lower rate. Based on the exploration and comparison of the developed CMFunctions, the seasonal autoregressive integrated moving average (SARIMA) and exponential functional forms of the nonlinear regression models can be utilized to identify the trend of CMFs over time. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.

  12. An Euler-Lagrange method considering bubble radial dynamics for modeling sonochemical reactors.

    PubMed

    Jamshidi, Rashid; Brenner, Gunther

    2014-01-01

Unsteady numerical computations are performed to investigate the flow field, wave propagation, and the structure of bubbles in sonochemical reactors. The turbulent flow field is simulated using a two-equation Reynolds-Averaged Navier-Stokes (RANS) model. The distribution of the acoustic pressure is solved based on the Helmholtz equation using a finite volume method (FVM). The radial dynamics of a single bubble are considered by applying the Keller-Miksis equation, which accounts for the compressibility of the liquid to first order in the acoustic Mach number. To investigate the structure of bubbles, a one-way coupled Euler-Lagrange approach is used to simulate the bulk medium and the bubbles as the dispersed phase. Drag, gravity, buoyancy, added mass, volume change, and primary Bjerknes forces are considered and their orders of magnitude are compared. To verify the implemented numerical algorithms, results for one- and two-dimensional simplified test cases are compared with analytical solutions. The results show good agreement with experimental results for the relationship between the acoustic pressure amplitude and the volume fraction of the bubbles. The two-dimensional axisymmetric results are in good agreement with the experimentally observed structure of bubbles close to the sonotrode. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Regarding the Focal Treatment of Prostate Cancer: Inference of the Gleason Grade From Magnetic Resonance Spectroscopic Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brame, Ryan S.; Zaider, Marco; Zakian, Kristen L.

    2009-05-01

Purpose: To quantify, as a function of average magnetic resonance spectroscopy (MRS) score and tumor volume, the probability that a cancer-suspected lesion has an elevated Gleason grade. Methods and Materials: The data consist of MRS imaging ratios R stratified by patient, lesion (contiguous abnormal voxels), voxels, biopsy and pathologic Gleason grade, and lesion volume. The data were analyzed using a logistic model. Results: For both low and high Gleason score biopsy lesions, the probability of pathologic Gleason score ≥4+3 increases with lesion volume. At low values of R a lesion volume of at least 15-20 voxels is needed to reach a probability of success of 80%; the biopsy result helps reduce the prediction uncertainty. At larger MRS ratios (R > 6) the biopsy result becomes essentially uninformative once the lesion volume is >12 voxels. With the exception of low values of R, for lesions with low Gleason score at biopsy, the MRS ratios serve primarily as a selection tool for assessing lesion volumes. Conclusions: In patients with biopsy Gleason score ≥4+3, high MRS imaging tumor volume and (creatine + choline)/citrate ratio may justify the initiation of voxel-specific dose escalation. This is an example of biologically motivated focal treatment for which intensity-modulated radiotherapy and especially brachytherapy are ideally suited.

  14. Auditory attention in autism spectrum disorder: An exploration of volumetric magnetic resonance imaging findings.

    PubMed

    Lalani, Sanam J; Duffield, Tyler C; Trontel, Haley G; Bigler, Erin D; Abildskov, Tracy J; Froehlich, Alyson; Prigge, Molly B D; Travers, Brittany G; Anderson, Jeffrey S; Zielinski, Brandon A; Alexander, Andrew; Lange, Nicholas; Lainhart, Janet E

    2018-06-01

    Studies have shown that individuals with autism spectrum disorder (ASD) tend to perform significantly below typically developing individuals on standardized measures of attention, even when controlling for IQ. The current study sought to examine within ASD whether anatomical correlates of attention performance differed between those with average to above-average IQ (AIQ group) and those with low-average to borderline ability (LIQ group) as well as in comparison to typically developing controls (TDC). Using automated volumetric analyses, we examined regional volume of classic attention areas including the superior frontal gyrus, anterior cingulate cortex, and precuneus in ASD AIQ (n = 38) and LIQ (n = 18) individuals along with 30 TDC. Auditory attention performance was assessed using subtests of the Test of Memory and Learning (TOMAL) compared among the groups and then correlated with regional brain volumes. Analyses revealed group differences in attention. The three groups did not differ significantly on any auditory attention-related brain volumes; however, trends toward significant size-attention function interactions were observed. Negative correlations were found between the volume of the precuneus and auditory attention performance for the AIQ ASD group, indicating larger volume related to poorer performance. Implications for general attention functioning and dysfunctional neural connectivity in ASD are discussed.

  15. 7 CFR 1219.30 - Establishment and membership.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... volume of domestic production and the average volume of imports into the United States market over the..., the Board shall review the production of domestic Hass avocados in the United States and the volume of... reapportionment of the positions authorized in paragraph (b)(3) of this section to reflect changes in the...

  16. 40 CFR 60.463 - Performance test and compliance provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator shall use the following procedures for determining monthly volume-weighted average emissions of... Method 24 or an equivalent or alternative method. The owner or operator shall determine the volume of... facilities, the owner or operator shall estimate the volume of coating used at each affected facility by...

  17. 40 CFR 60.313 - Performance tests and compliance provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determining monthly volume-weighted average emissions of VOC's in kilograms per liter of coating solids... shall determine the volume of coating and the mass of VOC-solvent used for thinning purposes from... facility or serves both affected and existing facilities, the owner or operator shall estimate the volume...

  18. Utility of Early Post-operative High Resolution Volumetric MR Imaging after Transsphenoidal Pituitary Tumor Surgery

    PubMed Central

    Patel, Kunal S.; Kazam, Jacob; Tsiouris, Apostolos J.; Anand, Vijay K.; Schwartz, Theodore H.

    2014-01-01

Objective Controversy exists over the utility of early post-operative magnetic resonance imaging (MRI) after transsphenoidal pituitary surgery for macroadenomas. We investigate whether valuable information can be derived from current higher resolution scans. Methods Volumetric MRI scans were obtained in the early (<10 days) and late (>30 days) post-operative periods in a series of patients undergoing transsphenoidal pituitary surgery. The volume of the residual tumor, resection cavity, and corresponding visual field tests were recorded at each time point. Statistical analyses of changes in tumor volume and cavity size were performed using the late MRI as the gold standard. Results 40 patients met the inclusion criteria. Pre-operative tumor volume averaged 8.8 cm3. Early postoperative assessment of average residual tumor volume (1.18 cm3) was quite accurate and did not differ statistically from late post-operative volume (1.23 cm3, p=.64), indicating the utility of early scans to measure residual tumor. Early scans were 100% sensitive and 91% specific for predicting ≥ 98% resection (p<.001, Fisher’s exact test). The average percent decrease in cavity volume from pre-operative MRI (tumor volume) to early post-operative imaging was 45% with decreases in all but 3 patients. There was no correlation between the size of the early cavity and the visual outcome. Conclusions Early high resolution volumetric MRI is valuable in determining the presence or absence of residual tumor. Cavity volume almost always decreases after surgery and a lack of decrease should alert the surgeon to possible persistent compression of the optic apparatus that may warrant re-operation. PMID:25045791
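Sensitivity and specificity figures like the 100%/91% reported above come directly from a 2×2 confusion matrix. A sketch with hypothetical counts (illustrative only, not the study's actual table):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for predicting near-total (>=98%) resection
sens, spec = sens_spec(tp=29, fn=0, tn=10, fp=1)  # 100% sensitive, ~91% specific
```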

  19. Asymptotic Dynamics of Self-driven Vehicles in a Closed Boundary

    NASA Astrophysics Data System (ADS)

    Lee, Chi-Lun; Huang, Chia-Ling

    2011-08-01

We study the asymptotic dynamics of self-driven vehicles in a loop using a car-following model that takes volume exclusion into account. In particular, we derive the dynamical steady states for the single-cluster case and obtain the corresponding fundamental diagrams, which exhibit two branches representing vehicles entering and leaving the jam, respectively. By simulations we find that the speed averaged over all vehicles eventually reaches the same value, regardless of the final clustering state. The autocorrelation functions for the overall speed average and for single-vehicle speed are studied, each revealing a unique time scale. We also discuss the role of noise in vehicular accelerations. Based on our observations we propose trial definitions of the degree of chaoticity for general self-driven many-body systems.
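The autocorrelation functions discussed above can be estimated with a standard biased sample ACF, whose decay or oscillation reveals the characteristic time scale. A generic sketch on a synthetic periodic "speed" series (the signal is illustrative, not the paper's car-following model):

```python
import numpy as np

def acf(x, max_lag):
    """Biased sample autocorrelation of a 1-D series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = float(np.dot(x, x))
    return np.array([float(np.dot(x[: len(x) - k], x[k:])) / var
                     for k in range(max_lag + 1)])

# Synthetic speed record with a built-in 50-step time scale
t = np.arange(2000)
speed = 1.0 + 0.2 * np.sin(2.0 * np.pi * t / 50.0)
rho = acf(speed, 100)  # peaks near multiples of the 50-step period
```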

  20. Evaluation of a knowledge-based planning solution for head and neck cancer.

    PubMed

    Tol, Jim P; Delaney, Alexander R; Dahele, Max; Slotman, Ben J; Verbakel, Wilko F A R

    2015-03-01

    Automated and knowledge-based planning techniques aim to reduce variations in plan quality. RapidPlan uses a library consisting of different patient plans to make a model that can predict achievable dose-volume histograms (DVHs) for new patients and uses those models for setting optimization objectives. We benchmarked RapidPlan versus clinical plans for 2 patient groups, using 3 different libraries. Volumetric modulated arc therapy plans of 60 recent head and neck cancer patients that included sparing of the salivary glands, swallowing muscles, and oral cavity were evenly divided between 2 models, Model(30A) and Model(30B), and were combined in a third model, Model60. Knowledge-based plans were created for 2 evaluation groups: evaluation group 1 (EG1), consisting of 15 recent patients, and evaluation group 2 (EG2), consisting of 15 older patients in whom only the salivary glands were spared. RapidPlan results were compared with clinical plans (CP) for boost and/or elective planning target volume homogeneity index, using HI(B)/HI(E) = 100 × (D2% - D98%)/D50%, and mean dose to composite salivary glands, swallowing muscles, and oral cavity (D(sal), D(swal), and D(oc), respectively). For EG1, RapidPlan improved HI(B) and HI(E) values compared with CP by 1.0% to 1.3% and 1.0% to 0.6%, respectively. Comparable D(sal) and D(swal) values were seen in Model(30A), Model(30B), and Model60, decreasing by an average of 0.1, 1.0, and 0.8 Gy and 4.8, 3.7, and 4.4 Gy, respectively. However, differences were noted between individual organs at risk (OARs), with Model(30B) increasing D(oc) by 0.1, 3.2, and 2.8 Gy compared with CP, Model(30A), and Model60. Plan quality was less consistent when the patient was flagged as an outlier. For EG2, RapidPlan decreased D(sal) by 4.1 to 4.9 Gy on average, whereas HI(B) and HI(E) decreased by 1.1% to 1.5% and 2.3% to 1.9%, respectively. 
RapidPlan knowledge-based treatment plans were comparable to CP if the patient's OAR-planning target volume geometry was within the range of those included in the models. EG2 results showed that a model including swallowing-muscle and oral-cavity sparing can be applied to patients with only salivary gland sparing. This may allow model library sharing between institutes. Optimal detection of inadequate plans and population of model libraries requires further investigation. Copyright © 2015 Elsevier Inc. All rights reserved.
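The homogeneity index used above, HI = 100 × (D2% − D98%) / D50%, can be computed from dose-volume percentiles of the voxel doses (Dx% is the minimum dose received by the hottest x% of the volume, i.e. the (100 − x)th percentile). A sketch on a hypothetical dose sample, not any plan from the study:

```python
import numpy as np

def homogeneity_index(dose):
    """HI = 100 * (D2% - D98%) / D50%, with Dx% read off the
    dose-volume histogram as the (100 - x)th dose percentile."""
    d2 = np.percentile(dose, 98)    # near-maximum dose
    d98 = np.percentile(dose, 2)    # near-minimum dose
    d50 = np.percentile(dose, 50)   # median dose
    return 100.0 * (d2 - d98) / d50

# Hypothetical PTV voxel doses (Gy), tightly clustered around 70 Gy
rng = np.random.default_rng(0)
dose = rng.normal(70.0, 1.0, 10_000)
hi = homogeneity_index(dose)  # a few percent for a homogeneous plan
```

Lower HI means a more homogeneous target dose, which is why the 1.0-1.5% improvements above favor the knowledge-based plans.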

  1. Quantitation of mandibular symphysis volume as a source of bone grafting.

    PubMed

    Verdugo, Fernando; Simonian, Krikor; Smith McDonald, Roberto; Nowzari, Hessam

    2010-06-01

Autogenous intramembranous bone grafts present several advantages, such as minimal resorption and a high concentration of bone morphogenetic proteins. A method for measuring the amount of bone that can be harvested from the symphysis area has not been reported in real patients. The aim of the present study was to quantitate intrasurgically the volume of symphysis bone graft that can be safely harvested in live patients and compare it with AutoCAD (version 16.0, Autodesk, Inc., San Rafael, CA, USA) tomographic calculations. The AutoCAD software program quantitated the symphysis bone graft in 40 patients using computerized tomographies. Direct intrasurgical measurements were recorded thereafter and compared with the AutoCAD data. The bone volume was measured at the recipient sites of a subgroup of 10 patients, 6 months post sinus augmentation. The volume of bone graft measured by AutoCAD averaged 1.4 mL (SD 0.6 mL, range 0.5-2.7 mL). The volume of bone graft measured intrasurgically averaged 2.3 mL (SD 0.4 mL, range 1.7-2.8 mL). The difference between the two measurement methods was statistically significant. The bone volume measured at the recipient sites 6 months post sinus augmentation averaged 1.9 mL (SD 0.3 mL, range 1.3-2.6 mL), with a mean loss of 0.4 mL. AutoCAD did not overestimate the volume of bone that can be safely harvested from the mandibular symphysis. The use of the design software program may improve surgical treatment planning prior to sinus augmentation.

  2. Intraday Seasonalities and Nonstationarity of Trading Volume in Financial Markets: Individual and Cross-Sectional Features.

    PubMed

    Graczyk, Michelle B; Duarte Queirós, Sílvio M

    2016-01-01

We study the intraday behaviour of the statistical moments of the trading volume of the blue chip equities that composed the Dow Jones Industrial Average index between 2003 and 2014. By splitting that time interval into semesters, we provide a quantitative account of the nonstationary nature of the intraday statistical properties as well. Explicitly, we show that the well-known ∪-shape exhibited by the average trading volume (as well as by the volatility of the price fluctuations) experienced a significant change from 2008 (the year of the "subprime" financial crisis) onwards. That has resulted in a faster relaxation after the market opening and relates to a consistent decrease in the convexity of the average trading volume intraday profile. Simultaneously, the last part of the session has become steeper as well, a modification that is likely to have been triggered by the new short-selling rules that were introduced in 2007 by the Securities and Exchange Commission. The combination of both results reveals that the ∪ has been turning into a ⊔. Additionally, the analysis of higher-order cumulants, namely the skewness and the kurtosis, shows that the morning and the afternoon parts of the trading session are each clearly associated with different statistical features and hence dynamical rules. Concretely, we claim that the large initial trading volume is due to wayward stocks, whereas the large volume during the last part of the session hinges on a cohesive increase of the trading volume. That dissimilarity between the two parts of the trading session is stressed in periods of higher uproar in the market.
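The intraday statistical moments analyzed above (mean, volatility, skewness, kurtosis) are ordinary sample moments computed over intraday intervals. A sketch on a synthetic ∪-shaped profile (the profile is an illustrative assumption, not market data):

```python
import numpy as np

def sample_moments(x):
    """Mean, standard deviation, skewness, and excess kurtosis of a sample."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()
    z = (x - m) / s
    return m, s, float((z ** 3).mean()), float((z ** 4).mean() - 3.0)

# Synthetic U-shaped intraday profile: volume high at open and close
session = np.linspace(0.0, 1.0, 390)               # normalized session time
volume = 1.0 + 2.0 * (2.0 * session - 1.0) ** 2    # parabolic "U" shape
mean_v, std_v, skew_v, exkurt_v = sample_moments(volume)
```

The positive skewness here reflects the long right tail produced by the open/close spikes relative to the flat midday trough.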

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooker, A.; Gonder, J.; Lopp, S.

The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit and mixed logit methods to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range, and usable volume. The average importance of several attributes changes nonlinearly across its range and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed to sum petroleum use and greenhouse gas emissions, including the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, limits on the rate of new vehicle option introduction, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.
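The multinomial logit sales estimation described above assigns each vehicle a choice share proportional to the exponential of a weighted sum of its attributes. A sketch with made-up weights and vehicles (ADOPT's actual attribute set, weights, and heterogeneity treatment are more elaborate than this):

```python
import math

def choice_shares(utilities):
    """Multinomial logit: share_i = exp(u_i) / sum_j exp(u_j)."""
    mx = max(utilities)  # subtract the max for numerical stability
    ex = [math.exp(u - mx) for u in utilities]
    total = sum(ex)
    return [e / total for e in ex]

# Made-up attribute weights; negative weights mean "less is better"
weights = {"price": -0.3, "fuel_cost": -0.5, "accel": -0.2}
vehicles = [
    {"price": 25.0, "fuel_cost": 1.2, "accel": 8.0},   # cheaper, thirstier
    {"price": 32.0, "fuel_cost": 0.6, "accel": 6.5},   # pricier, efficient
]
utils = [sum(w * v[k] for k, w in weights.items()) for v in vehicles]
shares = choice_shares(utils)  # predicted market shares, summing to 1
```

A mixed logit extends this by drawing the weights from a distribution across consumers, as the abstract's "distribution of importance" describes.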

  4. Thomson scattering from a three-component plasma.

    PubMed

    Johnson, W R; Nilsen, J

    2014-02-01

A model for a three-component plasma consisting of two distinct ionic species and electrons is developed and applied to study x-ray Thomson scattering. Ions of a specific type are assumed to be identical and are treated in the average-atom approximation. Given the plasma temperature and density, the model predicts mass densities, effective ionic charges, and cell volumes for each ionic type, together with the plasma chemical potential and free-electron density. Additionally, the average-atom treatment of individual ions provides a quantum-mechanical description of bound and continuum electrons. The model is used to obtain parameters needed to determine the dynamic structure factors for x-ray Thomson scattering from a three-component plasma. The contribution from inelastic scattering by free electrons is evaluated in the random-phase approximation. The contribution from inelastic scattering by bound electrons is evaluated using the bound-state and scattering wave functions obtained from the average-atom calculations. Finally, the partial static structure factors for elastic scattering by ions are evaluated using a two-component version of the Ornstein-Zernike equations with hypernetted chain closure, in which electron-ion interactions are accounted for using screened ion-ion interaction potentials. The model is used to predict the x-ray Thomson scattering spectrum from a CH plasma and the resulting spectrum is compared with experimental results obtained by Fletcher et al. [Phys. Plasmas 20, 056316 (2013)].

  5. Seismic source functions from free-field ground motions recorded on SPE: Implications for source models of small, shallow explosions

    NASA Astrophysics Data System (ADS)

    Rougier, Esteban; Patton, Howard J.

    2015-05-01

Reduced displacement potentials (RDPs) for chemical explosions of the Source Physics Experiments (SPE) in granite at the Nevada National Security Site are estimated from free-field ground motion recordings. Far-field P wave source functions are proportional to the time derivative of RDPs. Frequency domain comparisons between measured source functions and model predictions show that high-frequency amplitudes roll off as ω⁻², but models fail to predict the observed seismic moment, corner frequency, and spectral overshoot. All three features are fit satisfactorily for the SPE-2 test after the cavity radius Rc is reduced by 12%, the elastic radius is reduced by 58%, and the peak-to-static pressure ratio on the elastic radius is increased by 100%, all with respect to the Mueller-Murphy model modified with the Denny-Johnson Rc scaling law. A large discrepancy is found between the cavity volume inferred from RDPs and the volume estimated from laser scans of the emplacement hole. The measurements imply a scaled Rc of ~5 m/kt^(1/3), more than a factor of 2 smaller than nuclear explosions. Less than 25% of the seismic moment can be attributed to cavity formation. A breakdown of the incompressibility assumption due to shear dilatancy of the source medium around the cavity is the likely explanation. New formulas are developed for volume changes due to medium bulking (or compaction). A 0.04% decrease of average density inside the elastic radius accounts for the missing volumetric moment. Assuming incompressibility, established Rc scaling laws predicted the moment reasonably well, but this was only fortuitous because dilation of the source medium compensated for the small cavity volume.

  6. SToRM: A numerical model for environmental surface flows

    USGS Publications Warehouse

    Simoes, Francisco J.

    2009-01-01

    SToRM (System for Transport and River Modeling) is a numerical model developed to simulate free surface flows in complex environmental domains. It is based on the depth-averaged St. Venant equations, which are discretized using unstructured upwind finite volume methods, and contains both steady and unsteady solution techniques. This article provides a brief description of the numerical approach selected to discretize the governing equations in space and time, including important aspects of solving natural environmental flows, such as the wetting and drying algorithm. The presentation is illustrated with several application examples, covering both laboratory and natural river flow cases, which show the model’s ability to solve complex flow phenomena.

  7. A water balance model to estimate flow through the Old and Middle River corridor

    USGS Publications Warehouse

    Andrews, Stephen W.; Gross, Edward S.; Hutton, Paul H.

    2016-01-01

    We applied a water balance model to predict tidally averaged (subtidal) flows through the Old River and Middle River corridor in the Sacramento–San Joaquin Delta. We reviewed the dynamics that govern subtidal flows and water levels and adopted a simplified representation. In this water balance approach, we estimated ungaged flows as linear functions of known (or specified) flows. We assumed that subtidal storage within the control volume varies because of fortnightly variation in subtidal water level, Delta inflow, and barometric pressure. The water balance model effectively predicts subtidal flows and approaches the accuracy of a 1–D Delta hydrodynamic model. We explore the potential to improve the approach by representing more complex dynamics and identify possible future improvements.
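
    The water balance described above, in which ungaged flows are estimated as linear functions of known flows, can be sketched as follows (the function name, coefficients, and sign convention are hypothetical, not the authors'):

    ```python
    def subtidal_corridor_flow(known_flows, coeffs, intercept, storage_change):
        """Tidally averaged water balance sketch: the ungaged flow is a linear
        function of the known (or specified) flows, and the corridor flow
        closes the volume budget after subtracting subtidal storage change."""
        ungaged = intercept + sum(c * q for c, q in zip(coeffs, known_flows))
        return sum(known_flows) + ungaged - storage_change
    ```

    In the paper's framing, the storage term would itself be modeled from fortnightly water-level variation, Delta inflow, and barometric pressure; here it is simply passed in.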

  8. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  9. Influence of external forcings on abrupt millennial-scale climate changes: a statistical modelling study

    NASA Astrophysics Data System (ADS)

    Mitsui, Takahito; Crucifix, Michel

    2017-04-01

    The last glacial period was punctuated by a series of abrupt climate shifts, the so-called Dansgaard-Oeschger (DO) events. The frequency of DO events varied in time, supposedly because of changes in background climate conditions. Here, the influence of external forcings on DO events is investigated with statistical modelling. We assume two types of simple stochastic dynamical systems models (double-well potential-type and oscillator-type), forced by the northern hemisphere summer insolation change and/or the global ice volume change. The model parameters are estimated by using the maximum likelihood method with the NGRIP Ca^{2+} record. The stochastic oscillator model with at least the ice volume forcing reproduces well the sample autocorrelation function of the record and the frequency changes of warming transitions in the last glacial period across MISs 2, 3, and 4. The model performance is improved with the additional insolation forcing. The BIC scores also suggest that the ice volume forcing is relatively more important than the insolation forcing, though the strength of evidence depends on the model assumption. Finally, we simulate the average number of warming transitions in the past four glacial periods, assuming the model can be extended beyond the last glacial, and compare the result with an Iberian margin sea-surface temperature (SST) record (Martrat et al. in Science 317(5837): 502-507, 2007). The simulation result supports the previous observation that abrupt millennial-scale climate changes in the penultimate glacial (MIS 6) are less frequent than in the last glacial (MISs 2-4). On the other hand, it suggests that the number of abrupt millennial-scale climate changes in older glacial periods (MISs 6, 8, and 10) might be larger than inferred from the SST record.

  10. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations fail to provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds.
    Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and, for the first time, shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
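
    The Beta model above is fully characterized by the first two moments of concentration. A sketch of the method-of-moments parameterization and a simple exceedance-probability estimate (generic code, not the authors'; concentration is assumed normalized to [0, 1]):

    ```python
    import math

    def beta_params_from_moments(mean, var):
        """Method-of-moments fit of Beta(a, b) to a concentration on [0, 1].
        Valid only when var < mean * (1 - mean)."""
        nu = mean * (1.0 - mean) / var - 1.0
        return mean * nu, (1.0 - mean) * nu

    def beta_exceedance(a, b, c, n=10000):
        """P(C > c) by trapezoidal integration of the Beta pdf (assumes a, b > 1)."""
        norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
        h = (1.0 - c) / n
        xs = [min(c + i * h, 1.0) for i in range(n + 1)]
        ys = [norm * x ** (a - 1.0) * (1.0 - x) ** (b - 1.0) for x in xs]
        return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])
    ```

    For example, a normalized mean of 0.3 with variance 0.01 yields Beta(6, 14); the probability of exceeding the mean is then slightly below one half, reflecting the distribution's right skew.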

  11. Horizontal ground deformation patterns and magma storage during the Puu Oo eruption of Kilauea volcano, Hawaii: episodes 22-42

    USGS Publications Warehouse

    Hoffmann, J.P.; Ulrich, G.E.; Garcia, M.O.

    1990-01-01

    Horizontal ground deformation measurements were made repeatedly with an electronic distance meter near the Puu Oo eruption site approximately perpendicular to Kilauea's east rift zone (ERZ) before and after eruptive episodes 22-42. Line lengths gradually extended during repose periods and rapidly contracted about the same amount following eruptions. The repeated extension and contraction of the measured lines are best explained by the elastic response of the country rock to the addition and subsequent eruption of magma from a local reservoir. The deformation patterns are modeled to constrain the geometry and location of the local reservoir near Puu Oo. The observed deformation is consistent with deformation patterns that would be produced by the expansion of a shallow, steeply dipping dike just uprift of Puu Oo striking parallel to the trend of the ERZ. The modeled dike is centered about 800 m uprift of Puu Oo. Its top is at a depth of 0.4 km, its bottom at about 2.9 km, and the length is about 1.6 km; the dike strikes N65°E and dips at about 87°SE. The model indicates that the dike expanded by 11 cm during repose periods, for an average volumetric expansion of nearly 500 000 m3. The volume of magma added to the dike during repose periods was variable but correlates positively with the volume of erupted lava of the subsequent eruption and represents about 8% of the new lava extruded. Dike geometry and expansion values are used to estimate the pressure increase near the eruption site due to the accumulation of magma during repose periods. On average, vent pressures increased by about 0.38 MPa during the repose periods, one-third of the pressure increase at the summit. The model indicates that the dikelike body below Puu Oo grew in volume from 3 million cubic meters (Mm3) to about 10-12 Mm3 during the series of eruptions. The width of this body was probably about 2.5-3.0 m. No net long-term deformation was detected along the measured deformation lines. © 1990 Springer-Verlag.

  12. A mathematical model to optimize the drain phase in gravity-based peritoneal dialysis systems.

    PubMed

    Akonur, Alp; Lo, Ying-Cheng; Cizman, Borut

    2010-01-01

    Use of patient-specific drain-phase parameters has previously been suggested to improve peritoneal dialysis (PD) adequacy. Improving management of the drain period may also help to minimize intraperitoneal volume (IPV). A typical gravity-based drain profile consists of a relatively constant initial fast-flow period, followed by a transition period and a decaying slow-flow period. That profile was modeled using the equation V_D(t) = (V_D0 - Q_MAX * t) * phi + (V_D0 * e^(-alpha*t)) * (1 - phi), where V_D(t) is the time-dependent dialysate volume; V_D0, the dialysate volume at the start of the drain; Q_MAX, the maximum drain flow rate; alpha, the exponential drain constant; and phi, the unit step function with respect to the flow transition. We simulated the effects of the assumed patient-specific maximum drain flow (Q_MAX) and transition volume (psi), the peritoneal volume percentage at which transition occurs, for fixed device-specific drain parameters. Average patient transport parameters were assumed during 5-exchange therapy with 10 L of PD solution. Changes in therapy performance strongly depended on the drain parameters. Comparing 400 mL/85% with 200 mL/65% (Q_MAX/psi), drain time (7.5 min vs. 13.5 min) and IPV (2769 mL vs. 2355 mL) increased when the initial drain flow was low and the transition quick. Ultrafiltration and solute clearances remained relatively similar. Such differences were augmented up to a drain time of 22 minutes and an IPV of more than 3 L when Q_MAX was 100 mL/min. The ability to model individual drain conditions together with water and solute transport may help to prevent patient discomfort with gravity-based PD. However, it is essential to note that practical difficulties such as displaced catheters and obstructed flow paths cause variability in drain characteristics even for the same patient, limiting the clinical applicability of this model.
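
    The piecewise drain profile above can be evaluated directly; a minimal sketch of the stated equation (variable names are transliterations of the paper's symbols, and the transition time is supplied rather than solved for):

    ```python
    import math

    def drain_volume(t, v_d0, q_max, alpha, t_star):
        """V_D(t) = (V_D0 - Q_MAX*t)*phi + V_D0*exp(-alpha*t)*(1 - phi),
        where phi = 1 before the flow transition at t_star and 0 after:
        a constant fast-flow ramp, then exponential slow-flow decay."""
        if t < t_star:
            return v_d0 - q_max * t          # fast-flow branch (phi = 1)
        return v_d0 * math.exp(-alpha * t)   # slow-flow branch (phi = 0)
    ```

    In practice t_star would be chosen so the two branches meet at the transition volume psi, making the profile continuous.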

  13. From Modeling of Plasticity in Single-Crystal Superalloys to High-Resolution X-rays Three-Crystal Diffractometer Peaks Simulation

    NASA Astrophysics Data System (ADS)

    Jacques, Alain

    2016-12-01

    The dislocation-based modeling of the high-temperature creep of two-phased single-crystal superalloys requires input data beyond strain vs time curves. This may be obtained by use of in situ experiments combining high-temperature creep tests with high-resolution synchrotron three-crystal diffractometry. Such tests give access to changes in phase volume fractions and to the average components of the stress tensor in each phase as well as the plastic strain of each phase. Further progress may be obtained by a new method making intensive use of the Fast Fourier Transform, and first modeling the behavior of a representative volume of material (stress fields, plastic strain, dislocation densities…), then simulating directly the corresponding diffraction peaks, taking into account the displacement field within the material, chemical variations, and beam coherence. Initial tests indicate that the simulated peak shapes are close to the experimental ones and are quite sensitive to the details of the microstructure and to dislocation densities at interfaces and within the soft γ phase.

  14. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    NASA Astrophysics Data System (ADS)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, the geodesic active contour model based on similarity (GACBS). Combining the spectral clustering algorithm based on Nystrom (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate an initial contour of the pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.
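
    The reported accuracy metric, volume overlap ratio, can be sketched for binary voxel masks as follows (a Jaccard-style definition is assumed here; the paper's exact definition may differ):

    ```python
    def volume_overlap_ratio(mask_a, mask_b):
        """Overlap ratio of two binary voxel masks, as a percentage:
        100 * |A intersection B| / |A union B| (Jaccard index)."""
        a, b = set(mask_a), set(mask_b)
        return 100.0 * len(a & b) / len(a | b)

    # Two masks sharing 3 of 5 distinct voxel indices overlap by 60%
    print(volume_overlap_ratio([1, 2, 3, 4], [2, 3, 4, 5]))
    ```

    Masks here are collections of voxel indices; for real CT volumes the same set arithmetic is usually done on boolean arrays.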

  15. Smart licensing and environmental flows: Modeling framework and sensitivity testing

    NASA Astrophysics Data System (ADS)

    Wilby, R. L.; Fenn, C. R.; Wood, P. J.; Timlett, R.; Lequesne, T.

    2011-12-01

    Adapting to climate change is just one among many challenges facing river managers. The response will involve balancing the long-term water demands of society with the changing needs of the environment in sustainable and cost effective ways. This paper describes a modeling framework for evaluating the sensitivity of low river flows to different configurations of abstraction licensing under both historical climate variability and expected climate change. A rainfall-runoff model is used to quantify trade-offs among environmental flow (e-flow) requirements, potential surface and groundwater abstraction volumes, and the frequency of harmful low-flow conditions. Using the River Itchen in southern England as a case study it is shown that the abstraction volume is more sensitive to uncertainty in the regional climate change projection than to the e-flow target. It is also found that "smarter" licensing arrangements (involving a mix of hands off flows and "rising block" abstraction rules) could achieve e-flow targets more frequently than conventional seasonal abstraction limits, with only modest reductions in average annual yield, even under a hotter, drier climate change scenario.

  16. Engine Hydraulic Stability. [injector model for analyzing combustion instability

    NASA Technical Reports Server (NTRS)

    Kesselring, R. C.; Sprouse, K. M.

    1977-01-01

    An analytical injector model was developed specifically to analyze combustion instability coupling between the injector hydraulics and the combustion process. This digital computer dynamic injector model will, for any imposed chamber or inlet pressure profile with a frequency ranging from 100 to 3000 Hz (minimum), accurately predict/calculate the instantaneous injector flowrates. The injector system is described in terms of which flow segments enter and leave each pressure node. For each flow segment, a resistance, line length, and area are required as inputs (the line lengths and areas are used in determining inertance). For each pressure node, volume and acoustic velocity are required as inputs (volume and acoustic velocity determine capacitance). The geometric criteria for determining inertances of flow segments and capacitances of pressure nodes were set. Also, a technique was developed for analytically determining time-averaged steady-state pressure drops and flowrates for every flow segment in an injector when such data are not known. These pressure drops and flowrates are then used in determining the linearized flow resistance for each line segment of flow.

  17. SU-D-201-07: Exploring the Utility of 4D FDG-PET/CT Scans in Design of Radiation Therapy Planning Compared with 3D PET/CT: A Prospective Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, C; Yin, Y

    2015-06-15

    Purpose: A method using four-dimensional (4D) PET/CT in the design of radiation treatment planning was proposed, and the target volume and radiation dose distribution changes relative to standard three-dimensional (3D) PET/CT were examined. Methods: A target deformable registration method was used by which the patient's whole respiration process was considered and the effect of respiration motion was minimized when designing the radiotherapy plan. The gross tumor volume of a non-small-cell lung cancer was contoured on the 4D FDG-PET/CT and 3D PET/CT scans by use of two different techniques: manual contouring by an experienced radiation oncologist using a predetermined protocol, and an automatic technique using a constant threshold of standardized uptake value (SUV) greater than 2.5. The target volume and radiotherapy dose distribution between VOL3D and VOL4D were analyzed. Results: For all phases, the average automatically and manually contoured GTV volumes were 18.61 cm3 (range, 16.39–22.03 cm3) and 31.29 cm3 (range, 30.11–35.55 cm3), respectively. The automatic and manual volumes of the merged IGTV were 27.82 cm3 and 49.37 cm3, respectively. For the manual contour, compared to the 3D plan, the mean doses for the left, right, and total lung in the 4D plan showed average decreases of 21.55%, 15.17%, and 15.86%, respectively. The maximum dose to the spinal cord showed an average decrease of 2.35%. For the automatic contour, the mean doses for the left, right, and total lung showed average decreases of 23.48%, 16.84%, and 17.44%, respectively. The maximum dose to the spinal cord showed an average decrease of 1.68%. Conclusion: In comparison to 3D PET/CT, 4D PET/CT may better define the extent of moving tumors and reduce the contoured tumor volume, thereby optimizing radiation treatment planning for lung tumors.

  18. Supernovae driven turbulence in the interstellar medium

    NASA Astrophysics Data System (ADS)

    Gent, Frederick A.

    2012-11-01

    I model the multi-phase interstellar medium (ISM) randomly heated and shocked by supernovae (SN), with gravity, differential rotation and other parameters we understand to be typical of the solar neighbourhood. The simulations are in a 3D domain extending horizontally 1x1 kpc^2 and vertically 2 kpc, symmetric about the galactic mid-plane. They routinely span gas number densities 10^{-5}-10^2 cm^{-3}, temperatures 10-10^8 K, speeds up to 10^3 km s^{-1} and Mach numbers up to 25. Radiative cooling is applied from two widely adopted parameterizations, and compared directly to assess the sensitivity of the results to cooling. There is strong evidence to describe the ISM as comprising well defined cold, warm and hot regions, typified by T ~ 10^2, 10^4 and 10^6 K, which are statistically close to thermal and total pressure equilibrium. This result is not sensitive to the choice of parameters considered here. The distribution of the gas density within each can be robustly modelled as lognormal. Appropriate distinction is required between the properties of the gases in the supernova active mid-plane and the more homogeneous phases outside this region. The connection between the fractional volume of a phase and its various proxies is clarified. An exact relation is then derived between the fractional volume and the filling factors defined in terms of the volume and probabilistic averages. These results are discussed in both observational and computational contexts. The correlation scale of the random flows is calculated from the velocity autocorrelation function; it is of order 100 pc and tends to grow with distance from the mid-plane. The origin and structure of the magnetic fields in the ISM is also investigated in nonideal MHD simulations. A seed magnetic field, with volume average of roughly 4 nG, grows exponentially to reach a statistically steady state within 1.6 Gyr.
Following Germano (1992), volume averaging is applied with a Gaussian kernel to separate the magnetic field into a mean field and fluctuations. Such averaging does not satisfy all Reynolds rules, yet allows a formulation of mean-field theory. The mean field thus obtained varies in both space and time. Growth rates differ for the mean and fluctuating fields and there is clear scale separation between the two elements, whose integral scales are about 0.7 kpc and 0.3 kpc, respectively. Analysis of the dependence of the dynamo on rotation, shear and SN rate is used to clarify its mean and fluctuating contributions. The resulting magnetic field is quadrupolar, symmetric about the mid-plane, with strong positive azimuthal and weak negative radial orientation. Contrary to conventional wisdom, the mean field strength increases away from the mid-plane, peaking outside the SN active region (|z| < 300 pc). The strength of the field is strongly dependent on density; in particular, the mean field is mainly organised in the warm gas, locally very strong in the cold gas, but almost absent in the hot gas. The field in the hot gas is weak and dominated by fluctuations.
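
    The Gaussian kernel averaging used above to split the field into mean and fluctuating parts, b = <b> + b', can be sketched in one dimension as follows (kernel truncation at the domain boundary is a simplification; names are illustrative):

    ```python
    import math

    def gaussian_kernel(radius, sigma):
        """Discrete, normalized Gaussian kernel of half-width `radius`."""
        w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
        s = sum(w)
        return [x / s for x in w]

    def mean_and_fluctuation(field, radius, sigma):
        """Split a 1-D field into a Gaussian-averaged mean part and fluctuations,
        b = <b> + b', in the spirit of kernel (Germano-style) averaging."""
        k = gaussian_kernel(radius, sigma)
        n = len(field)
        mean = []
        for i in range(n):
            acc = norm = 0.0
            for j, w in enumerate(k):
                idx = i + j - radius
                if 0 <= idx < n:          # renormalize where the kernel is truncated
                    acc += w * field[idx]
                    norm += w
            mean.append(acc / norm)
        fluct = [b - m for b, m in zip(field, mean)]
        return mean, fluct
    ```

    Unlike a top-hat average, the Gaussian kernel is smooth, which is what permits a well-defined scale separation even though the Reynolds averaging rules hold only approximately.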

  19. Exposure-Response Analysis of Micafungin in Neonatal Candidiasis: Pooled Analysis of Two Clinical Trials.

    PubMed

    Kovanda, Laura L; Walsh, Thomas J; Benjamin, Daniel K; Arrieta, Antonio; Kaufman, David A; Smith, P Brian; Manzoni, Paolo; Desai, Amit V; Kaibara, Atsunori; Bonate, Peter L; Hope, William W

    2018-06-01

    Neonatal candidiasis causes significant morbidity and mortality in high-risk infants. The micafungin dosage regimen of 10 mg/kg established for the treatment of neonatal candidiasis is based on a laboratory animal model of neonatal hematogenous Candida meningoencephalitis and pharmacokinetic (PK)-pharmacodynamic (PD) bridging studies. However, little is known about how these PK-PD data translate clinically. Micafungin plasma concentrations from infants were used to construct a population PK model using Pmetrics software. Bayesian posterior estimates for infants with invasive candidiasis were used to evaluate the relationship between drug exposure and mycologic response using logistic regression. Sixty-four infants 3-119 days of age were included, of which 29 (45%) infants had invasive candidiasis. A 2-compartment PK model fit the data well. Allometric scaling was applied to clearance and volume, normalized to the mean population weight (kg). The mean (standard deviation) estimates for clearance and volume in the central compartment were 0.07 (0.05) L/h/1.8 kg and 0.61 (0.53) L/1.8 kg, respectively. No relationship between the average daily area under the concentration-time curve or the average daily area under the concentration-time curve:minimum inhibitory concentration ratio and mycologic response was demonstrated (P > 0.05). Although not statistically significant, mycologic response was numerically higher when areas under the concentration-time curve were at or above the PD target. While a significant exposure-response relationship was not found, PK-PD experiments support higher exposures of micafungin in infants with invasive candidiasis. More patients would clarify this relationship; however, the low incidence limits the feasibility of such studies.
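
    The allometric scaling mentioned above can be sketched as follows (the 0.75 exponent for clearance is the conventional allometric value and is an assumption here; the abstract states only that parameters were normalized to the 1.8 kg mean population weight):

    ```python
    def allometric_clearance(cl_ref, weight_kg, ref_weight_kg=1.8, exponent=0.75):
        """Allometric scaling sketch: CL = CL_ref * (WT / WT_ref)**exponent.
        exponent=0.75 is the conventional value for clearance, assumed here."""
        return cl_ref * (weight_kg / ref_weight_kg) ** exponent
    ```

    At the reference weight the scaling factor is 1, recovering the population estimate of 0.07 L/h; heavier infants are assigned proportionally higher clearance.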

  20. A GIS planning model for urban oil spill management.

    PubMed

    Li, J

    2001-01-01

    Oil spills in industrialized cities pose a significant threat to their urban water environment. The largest city in Canada, the city of Toronto, has an average of 300-500 oil spills per year, with an average total volume of about 160,000 L/year. About 45% of the spills were eventually cleaned up. Given the enormous amount of remaining oil entering the fragile urban ecosystem, it is important to develop an effective pollution prevention and control plan for the city. A Geographic Information System (GIS) planning model has been developed to characterize oil spills and determine the preventive and control measures available in the city. A database of oil spill records from 1988 to 1997 was compiled and geo-referenced. Attributes for each record, such as spill volume, oil type, location, road type, sector, source, cleanup percentage, and environmental impacts, were created. GIS layers of woodlots, wetlands, watercourses, Environmentally Sensitive Areas, and Areas of Natural and Scientific Interest were obtained from the local Conservation Authority. By overlaying the spill characteristics with the GIS layers, an evaluation of preventive and control solutions close to these environmental features was conducted. It was found that employee training and preventive maintenance should be improved, as the principal causes of spills were attributed to human error and equipment failure. Additionally, the cost of using oil separators at strategic spill locations was found to be $1.4 million. The GIS model provides an efficient planning tool for urban oil spill management. Additionally, the graphical capability of GIS allows users to integrate environmental features and spill characteristics in the management analysis.
