Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 cm³) nor model-based (26.87 ± 2.99 cm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 cm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
CD volume design and verification
NASA Technical Reports Server (NTRS)
Li, Y. P.; Hughes, J. S.
1993-01-01
In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own models of CD volumes by modifying a prototypical model. Rule-based verification of the test volumes can then be performed against the volume definition. This working prototype has proven the concept of model-driven, rule-based design and verification for large quantities of data. The model defined for the CD-ROM volumes becomes a data model as well as an executable specification.
1987-07-01
Wetzel, Roger S.; Durst, Connie M.; Donald H.
In Situ Biological Treatment Test at Kelly Air Force Base, Volume II: Field Test Results and Cost Model (ESL-TR-85-52, Volume II)
A dynamical system of deposit and loan volumes based on the Lotka-Volterra model
NASA Astrophysics Data System (ADS)
Sumarti, N.; Nurfitriyana, R.; Nurwenda, W.
2014-02-01
In this research, we proposed a dynamical system of the deposit and loan volumes of a bank using a predator-prey paradigm, where the predator is the loan volume and the prey is the deposit volume. The existence of loans depends on the existence of deposits because the bank allocates the loan volume from a portion of the deposit volume. The dynamical systems constructed are a simple model, a model with a Michaelis-Menten response, and a model with a reserve requirement. Equilibria of the systems are analysed for stability based on their linearised systems.
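The predator-prey coupling described above can be sketched as a pair of coupled ODEs integrated with Euler steps; the rate constants and initial volumes below are illustrative assumptions, not values from the paper:

```python
def deposit_loan_step(d, l, dt, r_d=0.05, r_l=0.04, a=0.02):
    """One Euler step of a hypothetical predator-prey system in which
    the deposit volume d is the prey and the loan volume l is the predator."""
    dd = r_d * d - a * d * l      # deposits grow, are drawn down by lending
    dl = -r_l * l + a * d * l     # loans decay without deposits to draw on
    return d + dt * dd, l + dt * dl

# integrate forward from arbitrary starting volumes
d, l = 100.0, 10.0
for _ in range(1000):
    d, l = deposit_loan_step(d, l, 0.01)
```

As in the classical Lotka-Volterra system, both state variables remain positive and cycle around the interior equilibrium (d* = r_l/a, l* = r_d/a).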
Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H
2011-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goals are: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP-, thickness-, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for the three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95, and specificity = 0.75. All thickness- and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
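The reported performance metrics follow from a standard 2x2 confusion matrix; a minimal sketch (the counts below are hypothetical, chosen only to reproduce sensitivity 0.95 and specificity 0.75):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# illustrative counts: 19 TP, 1 FN, 3 TN, 1 FP
acc, sens, spec = diagnostic_metrics(19, 1, 3, 1)
```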
Models for predicting the mass of lime fruits by some engineering properties.
Miraei Ashtiani, Seyed-Hassan; Baradaran Motie, Jalal; Emadi, Bagher; Aghkhani, Mohammad-Hosein
2014-11-01
Grading fruits based on mass is important in packaging; it reduces waste and increases the marketing value of agricultural produce. The aim of this study was mass modeling of two major cultivars of Iranian limes based on engineering attributes. Models were classified into three groups: 1) single and multiple variable regressions of lime mass and dimensional characteristics; 2) single and multiple variable regressions of lime mass and projected areas; and 3) single regressions of lime mass based on its actual volume and on its volume calculated assuming ellipsoid and prolate spheroid shapes. All properties considered in the current study were found to be statistically significant (p < 0.01). The results indicated that mass models of lime based on minor diameter and on first projected area are the most appropriate in the first and second classifications, respectively. In the third classification, the best model was obtained on the basis of the prolate spheroid volume. It was finally concluded that a suitable grading system for lime mass is one based on prolate spheroid volume.
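The third model class relies on the prolate spheroid volume, V = (pi/6)·a·b² with a the major and b the minor diameter; a minimal sketch with hypothetical fruit measurements (the calibration data below are made up, not the paper's):

```python
import math

def prolate_spheroid_volume(major_d, minor_d):
    """Volume of a prolate spheroid from its major and minor diameters:
    V = (pi/6) * a * b**2."""
    return math.pi / 6.0 * major_d * minor_d ** 2

# hypothetical calibration data: (major diameter, minor diameter, mass) per fruit
fruits = [(58.0, 52.0, 78.0), (61.0, 54.0, 88.0), (55.0, 50.0, 70.0)]
vols = [prolate_spheroid_volume(a, b) for a, b, _ in fruits]
masses = [m for _, _, m in fruits]

# single-variable regression through the origin: mass ~ k * volume
k = sum(v * m for v, m in zip(vols, masses)) / sum(v * v for v in vols)
```

For a sphere (equal diameters) the formula reduces to the familiar (4/3)·pi·r³, which is a quick sanity check on the implementation.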
[Modeling and analysis of volume conduction based on field-circuit coupling].
Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming
2012-08-01
Numerical simulations of volume conduction can be used to analyze the process of energy transfer and to explore the effects of physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method and developed a 3D coupled field-circuit model of volume conduction based on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for optimizing energy transfer. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross-sectional area, electrode distance, and circuit parameters on the performance of the volume conduction system were obtained, which provides a basis for the optimized design of energy transfer efficiency.
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power-law relationship and the three optimized models we proposed. Two data-fitting methods, log-transformed linear and original-data nonlinear least squares, were applied to the four models. Results show that original-data nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect performs best. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The total volume predicted by the existing relationship between earthquake magnitude and total landslide volume is much smaller than that obtained in this study, which points to the need to update the power-law relationship. PMID:27404212
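The conventional “volume-area” power law V = alpha · A**gamma referred to above can be fitted by log-transformed linear least squares; a self-contained sketch on synthetic data (the coefficients 0.05 and 1.4 are arbitrary, chosen only to verify recovery):

```python
import math

def fit_power_law(areas, volumes):
    """Log-transformed linear least squares for V = alpha * A**gamma:
    regress log(V) on log(A), then exponentiate the intercept."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = math.exp(my - gamma * mx)
    return alpha, gamma

# synthetic check: noiseless data from V = 0.05 * A**1.4 recovers both parameters
areas = [10.0, 50.0, 200.0, 1000.0, 5000.0]
vols = [0.05 * a ** 1.4 for a in areas]
alpha, gamma = fit_power_law(areas, vols)
```

The paper's preferred alternative, nonlinear least squares on the original (untransformed) data, weights large landslides more heavily; the log-linear fit above is the conventional baseline it is compared against.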
ERIC Educational Resources Information Center
Schalock, H. Del, Ed.; Hale, James R., Ed.
This main volume (SP 002 155-SP 002 180 comprise the appendixes to this volume) explains the ComField (competency based, field centered) Model--a systems approach to the education of elementary school teachers which entails specifications (1) for instruction and (2) for management of the instructional program. In an overview, the ComField Model is…
Roshani, G H; Karami, A; Salehizadeh, A; Nazemi, E
2017-11-01
The problem of how to precisely measure the volume fractions of oil-gas-water mixtures in a pipeline remains one of the main challenges in the petroleum industry. This paper reports the capability of Radial Basis Function (RBF) networks in forecasting the volume fractions in a gas-oil-water multiphase system. In the present research, the volume fractions in annular three-phase flow are measured based on a dual-energy metering system including ¹⁵²Eu and ¹³⁷Cs sources and one NaI detector, and then modeled by an RBF model. Since the volume fractions sum to a constant (100%), it is enough for the RBF model to forecast only two of them. In this investigation, three RBF models are employed. The first model forecasts the oil and water volume fractions, the second the water and gas volume fractions, and the third the gas and oil volume fractions. The numerical data obtained from the MCNP-X code are then introduced to the RBF models, and the average errors of the three models are calculated and compared. The model with the least error is selected as the best predictive model. Based on the results, the best RBF model forecasts the oil and water volume fractions with a mean relative error of less than 0.5%, which indicates that the RBF model introduced in this study provides an effective mechanism for forecasting the results.
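A minimal sketch of the closure idea: a Gaussian RBF model forecasts two fractions, and the third follows from the 100% constraint. The feature values and fractions below are synthetic placeholders, not the paper's detector spectra:

```python
import numpy as np

def rbf_fit(X, y, centers, s=1.0):
    """Least-squares weights for a Gaussian RBF model y ~ Phi(X) @ w."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-d ** 2 / (2 * s ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, w, s=1.0):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d ** 2 / (2 * s ** 2)) @ w

# hypothetical detector features -> (oil, water) fractions; gas closes the sum
X = np.array([[0.1, 0.2], [0.4, 0.3], [0.7, 0.1], [0.2, 0.8]])
y = np.array([[0.2, 0.3], [0.5, 0.2], [0.6, 0.1], [0.1, 0.7]])
w = rbf_fit(X, y, X)           # centers at the training points
pred = rbf_predict(X, X, w)
gas = 1.0 - pred.sum(axis=1)   # closure: the three fractions sum to 100%
```

With centers placed at the training points, the Gaussian kernel matrix is positive definite, so the model interpolates the training data exactly.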
The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four report volumes. Moreover, the tests are generally applicable to other model evaluation problem...
QSPR modeling: graph connectivity indices versus line graph connectivity indices
Basak; Nikolic; Trinajstic; Amic; Beslo
2000-07-01
Five QSPR models of alkanes were reinvestigated. Properties considered were molecular surface-dependent properties (boiling points and gas chromatographic retention indices) and molecular volume-dependent properties (molar volumes and molar refractions). The vertex- and edge-connectivity indices were used as structural parameters. In each studied case we computed connectivity indices of alkane trees and alkane line graphs and searched for the optimum exponent. Models based on indices with an optimum exponent and on the standard value of the exponent were compared. Thus, for each property we generated six QSPR models (four for alkane trees and two for the corresponding line graphs). In all studied cases, QSPR models based on connectivity indices with optimum exponents have better statistical characteristics than the models based on connectivity indices with the standard value of the exponent. The comparison between models based on vertex- and edge-connectivity indices showed that in two cases (molar volumes and molar refractions) the edge-connectivity indices gave better models, and in three cases (boiling points for octanes and nonanes, and gas chromatographic retention indices) the vertex-connectivity indices gave better models. Thus, it appears that the edge-connectivity index is more appropriate for modeling structure-molecular volume properties and the vertex-connectivity index for modeling structure-molecular surface properties. The use of line graphs did not improve the predictive power of the connectivity indices. Only in one case (boiling points of nonanes) was a better model obtained with the use of line graphs.
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using ¹⁸F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
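The relative threshold method reduces to masking voxels at a fixed fraction of a reference SUV; a sketch with synthetic SUV values and an assumed voxel volume (the 0.064 ml voxel, i.e. a 4 mm cube, is a placeholder):

```python
import numpy as np

def tumor_volume_fixed_threshold(suv, suv_ref, threshold=0.40, voxel_ml=0.064):
    """Relative threshold method: voxels with SUV >= threshold * suv_ref
    count as tumor; volume = voxel count * per-voxel volume (ml).
    Passing SUVpeak or SUVmax as suv_ref selects the variant."""
    mask = suv >= threshold * suv_ref
    return float(mask.sum()) * voxel_ml

# toy 3x3 SUV slice; with suv_ref = 5.0 the 40% cutoff is SUV >= 2.0
suv = np.array([[0.5, 2.0, 4.5],
                [5.0, 4.8, 1.0],
                [0.2, 3.9, 0.3]])
vol = tumor_volume_fixed_threshold(suv, suv_ref=5.0)
```

Because SUVpeak (a small-sphere average around the hottest voxel) is less sensitive to reconstruction-dependent noise than SUVmax, thresholding on it yields the more stable volumes reported above.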
Terlier, T; Lee, J; Lee, K; Lee, Y
2018-02-06
Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. 
Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Lee, CG; Chan, TCY
2014-06-15
Purpose: To develop mathematical models of tumor geometry changes under radiotherapy that may support future adaptive paradigms. Methods: A total of 29 cervical patients were scanned using MRI, once for planning and weekly thereafter for treatment monitoring. Using the tumor volumes contoured by a radiologist, three mathematical models were investigated based on the assumption of a stochastic process of tumor evolution. The “weekly MRI” model predicts tumor geometry for the following week from the last two consecutive MRI scans, based on the voxel transition probability. The other two models use only the first pair of consecutive MRI scans, and the transition probabilities were estimated via tumor type classified from the entire data set. The classification is based on either measuring the tumor volume (the “weekly volume” model) or implementing an auxiliary “Markov chain” model. These models were compared to a constant-volume approach that represents current clinical practice, using various model parameters; e.g., the threshold probability β converts the probability map into a tumor shape (a larger threshold implies a smaller tumor). Model performance was measured using the volume conformity index (VCI), i.e., the square of the union of the actual and modeled target volumes divided by the product of the two volumes. Results: The “weekly MRI” model outperforms the constant volume model by 26% on average, and by 103% for the worst 10% of cases, in terms of VCI under a wide range of β. The “weekly volume” and “Markov chain” models outperform the constant volume model by 20% and 16% on average, respectively. They also perform better than the “weekly MRI” model when β is large. Conclusion: It has been demonstrated that mathematical models can be developed to predict tumor geometry changes for cervical cancer undergoing radiotherapy. The models can potentially support an adaptive radiotherapy paradigm by reducing normal tissue dose.
This research was supported in part by the Ontario Consortium for Adaptive Interventions in Radiation Oncology (OCAIRO) funded by the Ontario Research Fund (ORF) and the MITACS Accelerate Internship Program.
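Taking the abstract's stated definition at face value, the VCI can be computed on voxel sets as the squared union over the product of the two volumes, so identical targets score 1 and any mismatch inflates the score; a sketch on toy voxel sets:

```python
def volume_conformity_index(actual, modeled):
    """VCI per the stated definition: |A union B|**2 / (|A| * |B|).
    Identical voxel sets give 1.0; disagreement pushes the value above 1."""
    union = len(actual | modeled)
    return union ** 2 / (len(actual) * len(modeled))

# toy voxel sets, indexed by flattened voxel id
perfect = volume_conformity_index({1, 2, 3, 4}, {1, 2, 3, 4})
shifted = volume_conformity_index({1, 2, 3, 4}, {3, 4, 5, 6})
```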
Relation Between the Cell Volume and the Cell Cycle Dynamics in Mammalian cell
NASA Astrophysics Data System (ADS)
Magno, A. C. G.; Oliveira, I. L.; Hauck, J. V. S.
2016-08-01
The main goal of this work is to add and analyze an equation that represents the volume in a dynamical model of the mammalian cell cycle proposed by Gérard and Goldbeter (2011) [1]. Cell division occurs when the cyclin B/Cdk1 complex is totally degraded (Tyson and Novak, 2011) [2] and reaches a minimum value. At this point, the cell divides into two newborn daughter cells, each containing half of the cytoplasmic content of the mother cell. The equations of our base model are only valid if the cell volume, where the reactions occur, is constant. If the cell volume is not constant, that is, if the rate of change of its volume with respect to time is explicitly taken into account in the mathematical model, then the equations of the original model are no longer valid. Therefore, every equation was modified from the mass conservation principle to consider a volume that changes with time. Through this approach, the cell volume affects all model variables. Two different dynamic simulation methods were used: deterministic and stochastic. In the stochastic simulation, the volume affects every model parameter with a molar unit, whereas in the deterministic one it is incorporated into the differential equations. In deterministic simulation, the biochemical species may be in concentration units, while in stochastic simulation such species must be converted to numbers of molecules, which are directly proportional to the cell volume. In an effort to understand the influence of the new equation, a stability analysis was performed. This elucidates how the growth factor impacts the stability of the model's limit cycles. In conclusion, a more precise model, in comparison to the base model, was created for the cell cycle, as it now takes into consideration the cell volume variation.
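The concentration-to-molecule-count conversion needed for the stochastic simulation is a one-liner; the example numbers (100 nM of a species in a 2 pL cell) are illustrative:

```python
AVOGADRO = 6.02214076e23  # molecules per mole

def molecule_count(concentration_nM, volume_fl):
    """Convert a concentration in nanomolar and a cell volume in femtoliters
    to an integer number of molecules, as required by stochastic simulation.
    The count scales linearly with the (time-varying) cell volume."""
    moles = concentration_nM * 1e-9 * volume_fl * 1e-15
    return round(moles * AVOGADRO)

# 100 nM in a 2 pL (2000 fL) mammalian cell
n = molecule_count(100, 2000)
```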
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long
2012-01-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
Temporal validation for landsat-based volume estimation model
Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan
2015-01-01
Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...
DOT National Transportation Integrated Search
1983-11-01
Volume 1 of this report describes model tests and analytical studies based on experience, interviews with design engineers, and literature reviews, carried out to develop design recommendations for concrete tunnel linings. Volume 2 contains the propo...
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by error in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method to quantify the uncertainty of VCs via a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theories of geographic information science.
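Of the two quadrature schemes named, the trapezoidal double rule (TDR) is the simpler; a sketch for a regular grid DEM (the grid values below are synthetic):

```python
def trapezoidal_double_rule(z, dx, dy):
    """Volume under a regular-grid surface z[i][j]: applying the 1D
    trapezoidal rule along both axes gives corner weight 1, edge weight 2,
    interior weight 4, all scaled by dx*dy/4."""
    nr, nc = len(z), len(z[0])
    total = 0.0
    for i in range(nr):
        for j in range(nc):
            w = (1 if i in (0, nr - 1) else 2) * (1 if j in (0, nc - 1) else 2)
            total += w * z[i][j]
    return total * dx * dy / 4.0

# the rule is exact for planar surfaces: both grids span a 2x2-cell square
flat = trapezoidal_double_rule([[2.0] * 3 for _ in range(3)], 1.0, 1.0)
plane = trapezoidal_double_rule([[float(i + j) for j in range(3)]
                                 for i in range(3)], 1.0, 1.0)
```

The truncation error of this rule on curved terrain is exactly the ME/TE the paper bounds with its confidence interval; Simpson's double rule follows the same pattern with 1-4-2-4 weights and needs an odd number of grid lines per axis.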
Numerical simulation of convective heat transfer of nonhomogeneous nanofluid using Buongiorno model
NASA Astrophysics Data System (ADS)
Sayyar, Ramin Onsor; Saghafian, Mohsen
2017-08-01
The aim is to assess the flow and convective heat transfer of laminar developing flow of an Al2O3-water nanofluid inside a vertical tube. A finite volume procedure on a structured grid was used to solve the governing partial differential equations. The adopted model (the Buongiorno model) assumes that the nanofluid is a mixture of a base fluid and nanoparticles, with the relative motion caused by Brownian motion and thermophoretic diffusion. The results showed that the distribution of nanoparticles remained almost uniform except in a region near the hot wall, where the nanoparticle volume fraction was reduced as a result of thermophoresis. The simulation results also indicated that there is an optimal volume fraction of about 1-2% of nanoparticles at each Reynolds number for which the maximum performance evaluation criterion can be obtained. The difference between the Nusselt number and nondimensional pressure drop calculated with the two-phase model and those calculated with the single-phase model was less than 5% at all nanoparticle volume fractions and can be neglected. In natural convection, for a 4% nanoparticle volume fraction, an enhancement of more than 15% in Nusselt number was achieved at Gr = 10, but at Gr = 300 it was less than 1%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondar, M.L., E-mail: m.bondar@erasmusmc.nl; Hoogeman, M.S.; Mens, J.W.
2012-08-01
Purpose: To design and evaluate individualized nonadaptive and online-adaptive strategies based on a pretreatment-established motion model for the highly deformable target volume in cervical cancer patients. Methods and Materials: For 14 patients, nine to ten variable bladder filling computed tomography (CT) scans were acquired at pretreatment and after 40 Gy. Individualized model-based internal target volumes (mbITVs), accounting for the cervix and uterus motion due to bladder volume changes, were generated by using a motion model constructed from two pretreatment CT scans (full and empty bladder). Two individualized strategies were designed: a nonadaptive strategy, using an mbITV accounting for the full range of bladder volume changes throughout the treatment; and an online-adaptive strategy, using mbITVs of bladder volume subranges to construct a library of plans. The latter adapts the treatment online by selecting the plan-of-the-day from the library based on the measured bladder volume. The individualized strategies were evaluated by the seven to eight CT scans not used for mbITV construction and compared with a population-based approach. Geometric uniform margins around the planning cervix-uterus and mbITVs were determined to ensure adequate coverage. For each strategy, the percentage of the cervix-uterus, bladder, and rectum volumes inside the planning target volume (PTV), and the clinical target volume (CTV)-to-PTV volume (volume difference between PTV and CTV), were calculated. Results: The margin for the population-based approach was 38 mm and for the individualized strategies was 7 to 10 mm. Compared with the population-based approach, the individualized nonadaptive strategy decreased the CTV-to-PTV volume by 48% ± 6% and the percentage of bladder and rectum inside the PTV by 5% to 45% and 26% to 74% (p < 0.001), respectively.
Replacing the individualized nonadaptive strategy by an online-adaptive, two-plan library further decreased the percentage of bladder and rectum inside the PTV (0% to 10% and -1% to 9%; p < 0.004) and the CTV-to-PTV volume (4-96 ml). Conclusions: Compared with population-based margins, an individualized PTV results in better organ-at-risk sparing. Online-adaptive radiotherapy further improves organ-at-risk sparing.
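The plan-of-the-day selection reduces to a range lookup over the plan library; a sketch with a hypothetical two-plan library (the subrange boundaries and plan names are made up):

```python
def plan_of_the_day(library, bladder_volume_ml):
    """Select the plan whose bladder-volume subrange contains the measured
    volume. `library` maps (low, high) ml ranges to plans."""
    for (low, high), plan in library.items():
        if low <= bladder_volume_ml < high:
            return plan
    raise ValueError("no plan in the library covers this bladder volume")

# hypothetical two-plan library built from mbITVs of two volume subranges
library = {(0, 150): "empty-bladder plan",
           (150, 600): "full-bladder plan"}
today = plan_of_the_day(library, 200)
```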
NASA Astrophysics Data System (ADS)
Xu, Yongbin; Xie, Haihong; Wu, Liuyi
2018-05-01
Coal accounts for roughly 50% of total railway freight volume. The coal industry is widely acknowledged to be sensitive to economic conditions and national policy, and coal transportation volume has fluctuated significantly under the new economic normal. Grasping the overall development trend of the railway coal transportation market therefore has important reference and guidance value for decision-making in both the railway and coal industries. By analyzing economic indicators and policy implications, this paper describes the trend of coal transportation volume, combines the economic indicators most highly correlated with coal transportation volume with a traditional traffic prediction model, and establishes a combined forecasting model based on a back-propagation neural network. Testing the error of the prediction results shows that the method achieves higher accuracy and has practical applicability.
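As a hedged illustration of the combined forecasting idea (the paper's data and network architecture are not given, so the indicators, layer sizes, and training data below are invented), a back-propagation network can be trained on economic indicators plus a traditional-model baseline forecast:

```python
import numpy as np

# Hypothetical sketch: feed economic indicators and a baseline
# (traditional-model) forecast into a one-hidden-layer network
# trained by plain back-propagation. All data are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # [GDP index, power output, baseline forecast]
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 2]).reshape(-1, 1)

W1 = rng.normal(scale=0.1, size=(3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros((1, 1))

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # forecast

losses = []
for _ in range(500):                # gradient descent on mean squared error
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    dW2 = h.T @ err / len(X); db2 = err.mean(0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)           # back-propagate through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0, keepdims=True)
    W1 -= 0.1 * dW1; b1 -= 0.1 * db1
    W2 -= 0.1 * dW2; b2 -= 0.1 * db2
```

The training error shrinks over the iterations, which is all this sketch is meant to show; a real application would use the paper's indicator series and validate on held-out months.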
Examination of simplified travel demand model. [Internal volume forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
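The simplified gravity model that the corrected IVF model reduces to can be sketched as follows; the zone data, distances, and scaling constant are hypothetical, and a simple inverse-square friction factor stands in for whatever impedance function a real calibration would use:

```python
# Illustrative (not the paper's calibration): a simplified gravity model
# T_ij = k * P_i * A_j * d_ij^-2 for inter-zone trip interchange.
productions = [1000.0, 500.0]      # trips produced per zone (hypothetical)
attractions = [800.0, 700.0]       # trips attracted per zone (hypothetical)
dist = [[1.0, 3.0], [3.0, 1.0]]    # inter-zone distances

def gravity_trips(P, A, d, k=1e-4):
    # trip table: each cell scales with production, attraction,
    # and an inverse-square distance friction factor
    return [[k * P[i] * A[j] * d[i][j] ** -2 for j in range(len(A))]
            for i in range(len(P))]

T = gravity_trips(productions, attractions, dist)
```

As expected of any gravity formulation, the nearer zone pair receives the larger interchange, all else equal.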
ERIC Educational Resources Information Center
Wisconsin Univ., Madison. Center for Studies in Vocational and Technical Education.
Volume 3 presents a descriptive outline of the Wisconsin school-based career placement model. The two major objectives for the model are: (1) to maximize the individual student's competencies for independent career functioning and (2) to maximize the availability of career placement options. For orderly transition, each student must receive the…
NASA Technical Reports Server (NTRS)
Sadunas, J. A.; French, E. P.; Sexton, H.
1973-01-01
A 1/25-scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effect of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full-scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of analytical predictions and comparisons with flight data is presented. Tabulation of the test data is provided.
DOT National Transportation Integrated Search
2002-09-01
This is Volume II of a two-volume report of a study to increase the scope and clarity of air pollution models for depressed highway and street canyon sites. It presents the atmospheric wind tunnel program conducted to increase the data base and i...
Airport Performance Model : Volume 2 - User's Manual and Program Documentation
DOT National Transportation Integrated Search
1978-10-01
Volume II contains a User's manual and program documentation for the Airport Performance Model. This computer-based model is written in FORTRAN IV for the DEC-10. The user's manual describes the user inputs to the interactive program and gives sample...
Design and Performance of the Sorbent-Based Atmosphere Revitalization System for Orion
NASA Technical Reports Server (NTRS)
Ritter, James A.; Reynolds, Steven P.; Ebner, Armin D.; Knox, James C.; LeVan, M. Douglas
2007-01-01
Validation and simulations of a real-time dynamic cabin model were conducted on the sorbent-based atmosphere revitalization system for Orion. The dynamic cabin model, which updates the concentration of H2O and CO2 every second during the simulation, was able to predict the steady state model values for H2O and CO2 for long periods of steady metabolic production for a 4-person crew. It also showed similar trends for the exercise periods, where there were quick changes in production rates. Once validated, the cabin model was used to determine the effects of feed flow rate, cabin volume and column volume. A higher feed flow rate reduced the cabin concentrations only slightly over the base case, a larger cabin volume was able to reduce the cabin concentrations even further, and the lower column volume led to much higher cabin concentrations. Finally, the cabin model was used to determine the effect of the amount of silica gel in the column. As the amount increased, the cabin concentration of H2O decreased, but the cabin concentration of CO2 increased.
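A minimal sketch of the lumped cabin mass balance such a dynamic model updates each second (all rates, volumes, and the removal term below are invented, not Orion values):

```python
# Toy 1 s explicit-Euler cabin balance: crew generation raises the CO2
# concentration, a sorbent loop removes a fraction of what flows through
# it, and the cabin free volume sets how fast the concentration moves.
def simulate_co2(volume_m3, gen_kg_s, removal_frac, flow_m3_s, steps=3600):
    c = 0.0005                      # initial CO2 concentration, kg/m^3
    for _ in range(steps):          # one step per second
        removed = removal_frac * flow_m3_s * c   # sorbent removal, kg/s
        c += (gen_kg_s - removed) / volume_m3
    return c

small = simulate_co2(volume_m3=10.0, gen_kg_s=4e-5,
                     removal_frac=0.9, flow_m3_s=0.01)
large = simulate_co2(volume_m3=20.0, gen_kg_s=4e-5,
                     removal_frac=0.9, flow_m3_s=0.01)
```

Consistent with the abstract's trend, the larger cabin volume yields the lower concentration over a fixed transient, because the same generation is diluted into more free volume.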
Effect of Cross-Linking on Free Volume Properties of PEG Based Thiol-Ene Networks
NASA Astrophysics Data System (ADS)
Ramakrishnan, Ramesh; Vasagar, Vivek; Nazarenko, Sergei
According to the Fox and Loshaek theory, free volume in elastomeric networks decreases linearly as cross-link density increases. The aim of this study is to determine whether poly(ethylene glycol) (PEG) based multicomponent thiol-ene elastomeric networks demonstrate this model behavior. Networks with a broad cross-link density range were prepared by changing the ratio of the trithiol crosslinker to PEG dithiol and then UV cured with PEG diene while maintaining 1:1 thiol:ene stoichiometry. Pressure-volume-temperature (PVT) data for the networks were generated from high-pressure dilatometry experiments and fitted using Simha-Somcynsky equation-of-state analysis to obtain the fractional free volume of the networks. Using Positron Annihilation Lifetime Spectroscopy (PALS) analysis, the average free volume hole size of the networks was also quantified. The fractional free volume and the average free volume hole size showed a linear change with cross-link density, confirming that the Fox and Loshaek theory can be applied to this multicomponent system. Gas diffusivities of the networks showed a good correlation with free volume. A free volume based model was developed to describe the gas diffusivity trends as a function of cross-link density.
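Since the Fox and Loshaek expectation tested here is simply a straight line, checking it amounts to a linear fit of fractional free volume against cross-link density; the numbers below are invented stand-ins for the PVT/PALS data:

```python
import numpy as np

# Hypothetical data lying exactly on a Fox-Loshaek-type line; a real
# check would fit the Simha-Somcynsky and PALS results instead.
crosslink_density = np.array([0.2, 0.5, 1.0, 1.5, 2.0])   # mol/L, invented
frac_free_volume = 0.080 - 0.006 * crosslink_density       # dimensionless

# degree-1 least-squares fit; a good linear fit (small residuals,
# negative slope) is the model behavior the study tests for
slope, intercept = np.polyfit(crosslink_density, frac_free_volume, 1)
```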
A Well-Clear Volume Based on Time to Entry Point
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.; Upchurch, Jason M.; Chamberlain, James P.; Consiglio, Maria C.
2014-01-01
A well-clear volume is a key component of NASA's Separation Assurance concept for the integration of UAS in the NAS. This paper proposes a mathematical definition of the well-clear volume that uses, in addition to distance thresholds, a time threshold based on time to entry point (TEP). The mathematical model that results from this definition is more conservative than other candidate definitions of the well-clear volume that are based on range over closure rate and time to closest point of approach.
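A hypothetical 2D illustration of the two time quantities being compared: time to closest point of approach versus the time at which an intruder enters a protected distance ring (a TEP-like quantity; the paper's actual TEP definition is more involved):

```python
import math

# s = relative position of the intruder, v = relative velocity.
def tcpa(sx, sy, vx, vy):
    # time to closest point of approach: minimizes |s + t v|
    v2 = vx * vx + vy * vy
    return 0.0 if v2 == 0 else -(sx * vx + sy * vy) / v2

def time_to_entry(sx, sy, vx, vy, D):
    # smallest t >= 0 with |s + t v| = D, or None if never entered
    a = vx * vx + vy * vy
    b = 2 * (sx * vx + sy * vy)
    c = sx * sx + sy * sy - D * D
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# head-on geometry: intruder 10 NM ahead, closing at 0.1 NM/s
t_entry = time_to_entry(10.0, 0.0, -0.1, 0.0, D=5.0)
t_cpa = tcpa(10.0, 0.0, -0.1, 0.0)
```

In this geometry the 5 NM ring is entered at t = 50 s while closest approach occurs at t = 100 s, showing why the two time measures alert differently for the same encounter.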
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
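For reference, the traditional mono-exponential back-extrapolation step that the paper improves on can be sketched as below (concentrations are synthetic; a real analysis would use measured indocyanine green data):

```python
import math

# Traditional method: fit ln C = ln C0 - k t on early post-injection
# samples, back-extrapolate to t = 0, then PV = dose / C0.
def backextrapolate_pv(times, concs, dose_mg):
    n = len(times)
    logs = [math.log(c) for c in concs]
    tbar = sum(times) / n
    lbar = sum(logs) / n
    # least-squares slope of ln C vs t gives -k
    k = -sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) / \
        sum((t - tbar) ** 2 for t in times)
    c0 = math.exp(lbar + k * tbar)    # extrapolated concentration at t = 0
    return dose_mg / c0               # plasma volume in L if concs are mg/L

times = [2, 3, 4, 5]                                # minutes post-injection
concs = [10.0 * math.exp(-0.2 * t) for t in times]  # exact exponential decay
pv = backextrapolate_pv(times, concs, dose_mg=25.0)
```

The paper's contribution is replacing this mono-exponential assumption, which misrepresents the first few minutes of kinetics, with an extrapolation derived from the physiological model.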
DOT National Transportation Integrated Search
1973-02-01
The volume presents the models used to analyze basic features of the system, establish feasibility of techniques, and evaluate system performance. The models use analytical expressions and computer simulations to represent the relationship between sy...
Estimation of truck volumes and flows
DOT National Transportation Integrated Search
2004-08-01
This research presents a statistical approach for estimating truck volumes, based : primarily on classification counts and information on roadway functionality, employment, : sales volume and number of establishments within the state. Models have bee...
Multivariate regression model for partitioning tree volume of white oak into round-product classes
Daniel A. Yaussy; David L. Sonderman
1984-01-01
Describes the development of multivariate equations that predict the expected cubic volume of four round-product classes from independent variables composed of individual tree-quality characteristics. Although the model has limited application at this time, it does demonstrate the feasibility of partitioning total tree cubic volume into round-product classes based on...
Dynamic soft tissue deformation estimation based on energy analysis
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Yao, Bin
2016-10-01
The needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions, so it is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics: an energy-based method is applied to the dynamic process of needle insertion into soft tissue, and the volume of a cone is used to quantitatively approximate the deformation on the surface of the soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformation is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is established from the law of conservation of energy, with the volume of tissue deformation obtained from the image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the model can predict the volume of soft tissue deformation, with root mean squared errors between the fitted model and the experimental data of 0.61 and 0.25 at insertion velocities of 2.50 mm/s and 5.00 mm/s, respectively.
The estimated parameters of the soft tissue surface deformation prove useful for compensating the needle-targeting error in rigid needle insertion procedures, especially for percutaneous needle insertion into organs.
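The cone approximation of the surface deformation reduces to elementary geometry; a minimal helper, with hypothetical image-derived measurements:

```python
import math

# The abstract approximates the dimpled tissue surface by a cone whose
# depth h and radius r come from the image-based measurements, so the
# deformed volume is simply (1/3) * pi * r^2 * h.
def cone_deformation_volume(radius_mm, depth_mm):
    return math.pi * radius_mm ** 2 * depth_mm / 3.0

# hypothetical readings from one video frame (mm), giving mm^3
v = cone_deformation_volume(radius_mm=6.0, depth_mm=3.0)
```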
An Agent Based Collaborative Simplification of 3D Mesh Model
NASA Astrophysics Data System (ADS)
Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro
Large-volume mesh models pose challenges for fast rendering and transmission over the Internet. Mesh models obtained with three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the mobile-C development platform. Communication among distributed agents includes grabbing images of the visualized mesh model, annotating the grabbed images, and instant messaging. Remote, collaborative simplification can be conducted efficiently over the Internet.
Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems
Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal
2018-04-06
Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results.
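As a toy example of one of the volumetric models evaluated, a voxel estimate counts occupied cells of edge length h and multiplies by h^3 (the point cloud below is synthetic, not TLS data):

```python
import numpy as np

# Synthetic stand-in for a TLS point cloud: points densely filling a
# unit cube, so the true volume is 1.0 and the voxel estimate should
# approach it from below (unoccupied cells are never counted).
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(50000, 3))

def voxel_volume(points, h):
    # map each point to its integer voxel index, count distinct voxels
    occupied = np.unique(np.floor(points / h).astype(int), axis=0)
    return occupied.shape[0] * h ** 3

v = voxel_volume(pts, h=0.1)
```

The estimate's bias flips with point density and occlusion, which is exactly the sensitivity the paper reports for voxels on tree data.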
Left ventricular endocardial surface detection based on real-time 3D echocardiographic data
NASA Technical Reports Server (NTRS)
Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.
2001-01-01
OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography reconstructed off-line and by RT3DE. Volume estimates obtained by this new semi-automatic method show excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion, and volume reconstruction can be accomplished. The visualization technique also allows the user to navigate into the reconstructed volume and display any section of it.
Cost Model/Data Base Catalog Non-DoD/Academic Survey. Volume 1. Project Summary
1988-10-30
presented in two volumes: Volume 1 - Project Summary, and Volume 2 - Final Data Base.
NASA Astrophysics Data System (ADS)
Joshi, Pranit Satish; Mahapatra, Pallab Sinha; Pattamatta, Arvind
2017-12-01
Experiments and numerical simulation of natural convection heat transfer with nanosuspensions are presented in this work. The investigations are carried out for three different types of nanosuspensions: namely, spherical-based (alumina/water), tubular-based (multi-walled carbon nanotube/water), and flake-based (graphene/water). A comparison with in-house experiments is made for all three nanosuspensions at different volume fractions and for Rayleigh numbers in the range of 7 × 10^5 to 1 × 10^7. Different models such as single-component homogeneous, single-component non-homogeneous, and multicomponent non-homogeneous are used in the present study. From the present numerical investigation, it is observed that for the lower volume fractions considered (~0.1%), single-component models are in close agreement with the experimental results. Single-component models, which are based on the effective properties of the nanosuspensions alone, can predict heat transfer characteristics very well within the experimental uncertainty. For higher volume fractions (~0.5%), the multicomponent model predicts results closer to the experimental observations, as it incorporates the drag-based slip force, which becomes prominent. The enhancement observed at lower volume fractions for non-spherical particles is attributed to percolation chain formation, which perturbs the boundary layer and thereby increases the local Nusselt number values.
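One common effective-property input to such single-component models is Maxwell's effective thermal conductivity formula for a dilute suspension of spheres; a sketch with water as the base fluid and an alumina-like particle conductivity (values rounded, and not taken from this paper):

```python
# Maxwell effective-conductivity model for a dilute suspension:
# k_eff = k_f * (k_p + 2 k_f + 2 phi (k_p - k_f))
#             / (k_p + 2 k_f -   phi (k_p - k_f))
def maxwell_k_eff(k_f, k_p, phi):
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# water base fluid (~0.613 W/m K), alumina-like particles (~40 W/m K),
# 0.1% volume fraction as in the paper's dilute cases
k = maxwell_k_eff(k_f=0.613, k_p=40.0, phi=0.001)
```

At such dilute loadings the predicted enhancement is a fraction of a percent, which is part of why single-component models suffice there.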
Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio
2017-10-01
Respiratory assessment can be carried out by using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters by using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on the tetrahedron decomposition of the chest wall that is integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet breathing data were collected from each of them. Results show a better agreement between volumes computed by the prism-based method and the spirometry (discrepancy of 2.23%, R^2 = 0.94) compared to the agreement between volumes computed by the conventional method and the spirometry (discrepancy of 3.56%, R^2 = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
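The prism idea can be sketched for a single chest patch: three markers span a triangle that is extruded toward a fixed back plane, giving a volume equal to the projected triangle area times the mean marker-to-plane distance (the paper builds 82 such prisms from 89 markers; the coordinates below are invented):

```python
# One prism of a prism-based chest model. Markers are (x, y, z) with z
# the distance from the back plane; coordinates in cm, volume in cm^3.
def triangle_area_xy(p1, p2, p3):
    # area of the triangle projected onto the back plane (z ignored)
    (x1, y1, _), (x2, y2, _), (x3, y3, _) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def prism_volume(p1, p2, p3, back_z=0.0):
    # extrude the projected triangle by the mean marker height
    mean_h = (p1[2] + p2[2] + p3[2]) / 3.0 - back_z
    return triangle_area_xy(p1, p2, p3) * mean_h

v = prism_volume((0.0, 0.0, 10.0), (4.0, 0.0, 12.0), (0.0, 3.0, 11.0))
```

Summing all 82 prism volumes frame by frame yields the chest volume signal from which breathing parameters are derived.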
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha
2018-01-01
It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.
Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D'Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto
2012-12-27
Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with already existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin's lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected and their correlation to RHT was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. When the thyroid volume exceeding X Gy is expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001, AUC = 0.87). Conversely, when the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on three variables including V30(cc), thyroid gland volume, and gender was selected as the most predictive model (Rs = 0.630, p < 0.001, AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264).
The absolute volume of thyroid gland exceeding 30 Gy in combination with thyroid gland volume and gender provide an NTCP model for RHT with improved prediction capability not only within our patient population but also in an external cohort.
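The selected two-variable model has the standard logistic NTCP form; the coefficients below are invented placeholders, not the fitted values from the paper:

```python
import math

# Logistic NTCP on V30(%) and gender, the two predictors the paper's
# optimal percentage-volume model selects. Coefficients are hypothetical.
def ntcp(v30_percent, female, b0=-4.0, b_v30=0.06, b_sex=1.2):
    s = b0 + b_v30 * v30_percent + b_sex * (1 if female else 0)
    return 1.0 / (1.0 + math.exp(-s))

low = ntcp(v30_percent=20, female=False)   # small irradiated fraction
high = ntcp(v30_percent=90, female=True)   # large fraction, female
```

With any positive dose-volume coefficient the predicted complication probability rises monotonically with V30, which is the qualitative behavior such models encode.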
Case mix-adjusted cost of colectomy at low-, middle-, and high-volume academic centers.
Chang, Alex L; Kim, Young; Ertel, Audrey E; Hoehn, Richard S; Wima, Koffi; Abbott, Daniel E; Shah, Shimul A
2017-05-01
Efforts to regionalize surgery based on thresholds in procedure volume may have consequences on the cost of health care delivery. This study aims to delineate the relationship between hospital volume, case mix, and variability in the cost of operative intervention using colectomy as the model. All patients undergoing colectomy (n = 90,583) at 183 academic hospitals from 2009-2012 in the University HealthSystems Consortium Database were studied. Patient and procedure details were used to generate a case mix-adjusted predictive model of total direct costs. Observed to expected costs for each center were evaluated between centers based on overall procedure volume. Patient and procedure characteristics were significantly different between volume tertiles. Observed costs at high-volume centers were less than at middle- and low-volume centers. According to our predictive model, high-volume centers cared for a less expensive case mix than middle- and low-volume centers ($12,786 vs $13,236 and $14,497, P < .01). Our predictive model accounted for 44% of the variation in costs. Overall efficiency (standardized observed to expected costs) was greatest at high-volume centers compared to middle- and low-volume tertiles (z score -0.16 vs 0.02 and -0.07, P < .01). Hospital costs and cost efficiency after an elective colectomy vary significantly between centers and may be attributed partially to the patient differences at those centers. These data demonstrate that a significant proportion of the cost variation is due to a distinct case mix at low-volume centers, which may lead to perceived poor performance at these centers.
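The observed-to-expected standardization can be sketched in a few lines; the expected costs echo the case-mix figures quoted above, while the observed costs are invented:

```python
# Standardized observed-to-expected cost ratios across centers: a ratio
# below 1 means the center spent less than its case mix predicts, and
# z-scoring puts centers on a common efficiency scale.
observed = {"A": 11800.0, "B": 13500.0, "C": 14200.0}   # invented
expected = {"A": 12786.0, "B": 13236.0, "C": 14497.0}   # from a risk model

ratios = {k: observed[k] / expected[k] for k in observed}
mean = sum(ratios.values()) / len(ratios)
sd = (sum((r - mean) ** 2 for r in ratios.values()) / len(ratios)) ** 0.5
z = {k: (r - mean) / sd for k, r in ratios.items()}
```

Center A spends less than its case mix predicts (negative z), center B more, mirroring how the study ranks efficiency across volume tertiles.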
Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui
2013-01-01
Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474
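A toy version of the scale-reference step: the known plate diameter converts pixel measurements to centimetres, after which a cylinder shape model gives the volume (all pixel values below are invented):

```python
import math

# Plate-as-reference scaling followed by a cylinder shape model, the
# simplest member of the shape library described above.
def food_volume_cylinder(px_per_cm, diameter_px, height_px):
    d_cm = diameter_px / px_per_cm
    h_cm = height_px / px_per_cm
    return math.pi * (d_cm / 2) ** 2 * h_cm   # cm^3

# a plate known to be 27 cm wide spans 300 px in the image
scale = 300 / 27.0
v = food_volume_cylinder(scale, diameter_px=100, height_px=40)
```

The real pipeline registers the projected 3D shape against the segmented food contour rather than reading diameters directly, but the scale conversion works the same way.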
Ueda, Yoshihiro; Fukunaga, Jun-Ichi; Kamima, Tatsuya; Adachi, Yumiko; Nakamatsu, Kiyoshi; Monzen, Hajime
2018-03-20
The aim of this study was to evaluate the performance of a commercial knowledge-based planning (KBP) system for volumetric modulated arc therapy for prostate cancer at multiple radiation therapy departments. In each institute, > 20 cases were assessed. For the knowledge-based planning, the estimated dose (ED) based on geometric and dosimetric information of plans was generated in the model. Lower and upper limits of the estimated dose were saved as dose volume histograms for each organ at risk (OAR). To verify whether the models performed correctly, KBP was compared with manual optimization planning in two cases. The relationships between the EDs in the models and the ratio of the OAR volume overlapping the planning target volume (PTV) to the whole organ volume (V_overlap/V_whole) were investigated. There were no significant dosimetric differences in OARs and PTV between manual optimization planning and knowledge-based planning. In knowledge-based planning, the differences between institutes in the volume ratios receiving 90% and 50% of the prescribed dose (V90 and V50) were more than 5.0% and 10.0%, respectively. The calculated doses with knowledge-based planning were between the upper and lower limits of ED or slightly under the lower limit of ED. The relationships between the lower limit of ED and V_overlap/V_whole differed among the models. In the V90 and V50 for the rectum, the maximum differences in the lower limit of ED among institutes were 8.2% and 53.5% when V_overlap/V_whole for the rectum was 10%. In the V90 and V50 for the bladder, the maximum differences in the lower limit of ED among institutes were 15.1% and 33.1% when V_overlap/V_whole for the bladder was 10%. Organs' upper and lower limits of ED in the models correlated closely with V_overlap/V_whole. It is important to determine whether the models in KBP match a different institute's plan design before the models can be shared.
Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System
NASA Technical Reports Server (NTRS)
Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.
2005-01-01
In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.
Radar volume reflectivity estimation using an array of ground-based rainfall drop size detectors
NASA Astrophysics Data System (ADS)
Lane, John; Merceret, Francis; Kasparis, Takis; Roy, D.; Muller, Brad; Jones, W. Linwood
2000-08-01
Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained from these single-point measurements has historically been important in the study of applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that hydrometeor dynamics within the sampling volume are influenced only by gravity and the terminal velocity due to atmospheric drag. The sampling volume is characterized by wind velocities, composed of vertical and horizontal components, which are input parameters to the 3D-DSD model. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
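The reflectivity being compared is the sixth moment of the drop size distribution, Z = ∫ N(D) D^6 dD. A minimal sketch with a hypothetical exponential (Marshall-Palmer-like) DSD; the intercept and slope values are illustrative only:

```python
import numpy as np

def reflectivity_dbz(diam_mm, n_d, bin_width_mm):
    """Radar reflectivity factor Z (mm^6 m^-3) as the sixth moment of a
    binned DSD N(D) (m^-3 mm^-1), converted to dBZ."""
    z = np.sum(n_d * diam_mm**6 * bin_width_mm)
    return 10.0 * np.log10(z)

d = np.arange(0.1, 6.0, 0.1)   # drop diameters (mm)
n0, lam = 8000.0, 2.0          # assumed intercept and slope parameters
nd = n0 * np.exp(-lam * d)     # exponential DSD
print(reflectivity_dbz(d, nd, 0.1))
```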
NASA Astrophysics Data System (ADS)
Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu
2018-05-01
A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined-blowing steelmaking converter. The Euler-based volume-of-fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method relying on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model's validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of bath flow structure.
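The filter-based idea can be reduced to a limiter on the standard k-epsilon eddy viscosity: eddies smaller than the local filter width set by the mesh are treated as resolved, so the modeled viscosity is capped. The form and constants below follow a common filter-based formulation and are illustrative assumptions, not necessarily the authors' exact expression:

```python
def filtered_eddy_viscosity(rho, k, eps, delta, c_mu=0.09):
    """Filter-based cap on the k-epsilon eddy viscosity: the standard
    mu_t = rho*C_mu*k^2/eps is multiplied by min(1, delta*eps/k^1.5),
    so mu_t shrinks where the mesh filter width `delta` already
    resolves the turbulent scales."""
    mu_t_std = rho * c_mu * k**2 / eps
    limiter = min(1.0, delta * eps / k**1.5)
    return mu_t_std * limiter
```

On a coarse mesh (large delta) the limiter is inactive and the standard k-epsilon value is recovered.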
Personalized models of bones based on radiographic photogrammetry.
Berthonnaud, E; Hilmi, R; Dimnet, J
2009-07-01
Radiographic photogrammetry is applied to locate anatomical landmarks in space from their two projected images. The goal of this paper is to define a personalized geometric model of bones based solely on photogrammetric reconstructions. The personalized bone models are obtained in two successive steps: their functional frameworks are first determined experimentally; then the 3D bone representation results from modeling techniques. Each bone's functional framework is derived from direct measurements on two radiographic images. These images may be obtained using either perpendicular (spine and sacrum) or oblique incidences (pelvis and lower limb). Frameworks link together the bones' functional axes and punctual landmarks. Each global bone volume is decomposed into several elementary components, and each volumetric component is represented by simple geometric shapes. The volumetric shapes are articulated to the patient's bone structure. Volumetric personalization is obtained by best-fitting the geometric model projections to their real images, using adjustable articulations. Examples are presented to illustrate the technique of personalizing bone volumes, derived directly from the treatment of only two radiographic images. The chosen data-processing techniques are then discussed. The 3D representation of bones supplements, for clinical users, the information provided by radiographic images.
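With two perpendicular incidences, reconstructing a landmark's 3D position reduces to combining the coordinates read off the two films. A parallel-projection sketch (a real photogrammetric setup must also calibrate the divergent-beam magnification, which is ignored here):

```python
def locate_landmark(frontal_xz, lateral_yz):
    """Biplanar reconstruction under parallel projection: the frontal
    film gives (x, z), the lateral film gives (y, z); the two z
    readings are averaged."""
    x, z1 = frontal_xz
    y, z2 = lateral_yz
    return (x, y, 0.5 * (z1 + z2))

# A landmark read as x=12, z=85 frontally and y=40, z=87 laterally
print(locate_landmark((12.0, 85.0), (40.0, 87.0)))  # -> (12.0, 40.0, 86.0)
```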
Multiscale Modeling of Angiogenesis and Predictive Capacity
NASA Astrophysics Data System (ADS)
Pillay, Samara; Byrne, Helen; Maini, Philip
Tumors induce the growth of new blood vessels from existing vasculature through angiogenesis. Using an agent-based approach, we model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, cell motility through biased random walks, and birth- and death-like processes. We use the transition probabilities associated with the discrete model and a discrete conservation equation for cell occupancy to determine collective cell behavior in terms of partial differential equations (PDEs). We derive three PDE models incorporating single-species volume exclusion, multi-species volume exclusion, and no volume exclusion. By fitting the parameters in our PDE models and in other well-established continuum models to agent-based simulations during a specific time period, and then comparing the outputs of the PDE models and the agent-based model at later times, we aim to determine how well the PDE models predict the future behavior of the agent-based model. We also determine whether predictions differ across PDE models and the significance of those differences. This may impact drug development strategies based on PDE models.
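The discrete rules (biased random walk plus volume exclusion) can be sketched as a lattice simulation; the bias, lattice size, and initial condition below are illustrative, not the paper's:

```python
import random

def step_excluded(lattice, p_right=0.7):
    """One sweep of a biased random walk with volume exclusion: a
    randomly chosen agent attempts a move and succeeds only if the
    target site is empty, so cell number is conserved."""
    n = len(lattice)
    for _ in range(n):
        i = random.randrange(n)
        if lattice[i] == 1:
            j = i + (1 if random.random() < p_right else -1)
            if 0 <= j < n and lattice[j] == 0:
                lattice[i], lattice[j] = 0, 1
    return lattice

random.seed(0)
lat = [1] * 20 + [0] * 80   # agents initially packed at the left edge
for _ in range(200):
    step_excluded(lat)
print(sum(lat))             # occupancy is conserved: still 20 agents
```

Averaging many such realizations gives the occupancy density whose continuum limit the PDE models describe.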
Are PCI Service Volumes Associated with 30-Day Mortality? A Population-Based Study from Taiwan.
Yu, Tsung-Hsien; Chou, Ying-Yi; Wei, Chung-Jen; Tung, Yu-Chi
2017-11-09
The volume-outcome relationship has been discussed for over 30 years; however, the findings are inconsistent. This might be due to the heterogeneity of service volume definitions and categorization methods. This study takes percutaneous coronary intervention (PCI) as an example to examine whether service volume was associated with PCI 30-day mortality under different service volume definitions and categorization methods. A population-based, cross-sectional multilevel study was conducted. Two definitions of physician and hospital volume were used: (1) the cumulative PCI volume in the year preceding each PCI; and (2) the cumulative PCI volume within the study period. The volume was further treated in three ways: (1) as a categorical variable based on the American Heart Association's recommendation; (2) as a semi-data-driven categorical variable based on the k-means clustering algorithm; and (3) as a data-driven categorical variable based on a Generalized Additive Model. The results showed that, after adjusting for patient-, physician-, and hospital-level covariates, physician volume was inversely associated with PCI 30-day mortality, but hospital volume was not, regardless of which definitions and categorization methods were applied. Physician volume is negatively associated with PCI 30-day mortality, but the results might vary with the definition and categorization method used.
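The semi-data-driven categorization can be illustrated with a one-dimensional k-means on provider volumes; the annual PCI counts below are hypothetical:

```python
import numpy as np

def kmeans_1d(volumes, k=3, iters=50):
    """1D k-means for splitting service volumes into k groups, as a
    sketch of the semi-data-driven categorization. Initial centers are
    spread with quantiles to make the result deterministic."""
    x = np.asarray(volumes, dtype=float)
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

vols = [5, 8, 12, 60, 75, 80, 150, 160, 170]  # hypothetical annual PCI counts
print(kmeans_1d(vols))
```

The sorted cluster centers induce data-driven volume thresholds, e.g. the midpoints between adjacent centers.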
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca
2015-10-31
To evaluate the performance of a broad-scope, model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients: from the same institutions and from another clinic that did not provide patients for the training phase. The automated plans were compared against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3%) the reference plans failed to respect the constraints while the model-based plans succeeded; in only 3 cases (<0.5%) did the reference plans pass the criteria while the model-based plans failed. In 5.3% of the cases both groups of plans failed, and in the remaining cases both passed. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements over the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application in clinical practice.
Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens Schadauer
2014-01-01
National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
Lee, G H; Hur, W; Bremmon, C E; Flickinger, M C
1996-03-20
A simulation was developed, based on experimental data obtained in a 14-L reactor, to predict the growth and L-lysine accumulation kinetics, and the change in volume, of a large-scale (250-m(3)) Bacillus methanolicus methanol-based process. Homoserine auxotrophs of B. methanolicus MGA3 are unique methylotrophs because of their ability to secrete lysine during aerobic growth and threonine starvation at 50 degrees C. Dissolved methanol (100 mM), pH, dissolved oxygen tension (0.063 atm), and threonine levels were controlled to obtain threonine-limited conditions and high cell density (25 g dry cell weight/L) in a 14-L reactor. As a fed-batch process, the additions of neat methanol (fed on demand), threonine, and other nutrients cause the volume of the fermentation to increase and the final lysine concentration to decrease. In addition, water produced by methanol metabolism contributes to the increase in reactor volume. A three-phase approach was used to predict the rate of change of culture volume based on carbon dioxide production and methanol consumption. This model was used to evaluate volume control strategies to optimize lysine productivity. A constant-volume reactor process with variable feeding and continuous removal of broth and cells (VF(cstr)) resulted in higher lysine productivity than a fed-batch process without volume control. This model predicts the variation in lysine productivity with changes in growth and in specific lysine productivity. Simple modifications of the model allow one to investigate other high-lysine-secreting strains with different growth and lysine productivity characteristics. Strain NOA2#13A5-2, which secretes lysine and other end-products, was modeled using both growth- and non-growth-associated lysine productivity. A modified version of this model was used to simulate the change in culture volume of another L-lysine-producing mutant (NOA2#13A52-8A66) with reduced secretion of end-products. The modified simulation indicated that growth-associated production dominates in strain NOA2#13A52-8A66. (c) 1996 John Wiley & Sons, Inc.
Modeling of hot-mix asphalt compaction : a thermodynamics-based compressible viscoelastic model
DOT National Transportation Integrated Search
2010-12-01
Compaction is the process of reducing the volume of hot-mix asphalt (HMA) by the application of external forces. As a result of compaction, the volume of air voids decreases, aggregate interlock increases, and interparticle friction increases. The qu...
NASA Astrophysics Data System (ADS)
Xu, Jie; Wu, Tao; Peng, Chuang; Adegbite, Stephen
2017-09-01
A geometric Plateau border model for closed-cell polyurethane foam was developed based on volume integration of an approximated 3D four-cusp hypocycloid structure. The tetrahedral structure of convex struts was orthogonally projected onto a 2D three-cusp deltoid with three central cylinders. The idealized single-unit strut was modeled by superposition, and the volume of each component was calculated by geometric analysis. The strut solid fraction f_s and foam porosity coefficient δ were calculated based on representative elementary volumes of the Kelvin and Weaire-Phelan structures. The specific surface area S_v, derived respectively from the packing structures and the deltoid approximation model, was compared against the strut dimensional ratio ε. The characteristic foam parameters obtained from this semi-empirical model were further employed to predict foam thermal conductivity.
A Biomechanical Model for Lung Fibrosis in Proton Beam Therapy
NASA Astrophysics Data System (ADS)
King, David J. S.
The physics of protons makes them well suited to conformal radiotherapy because of the well-known Bragg peak effect. Nevertheless, uncertainties in a proton's stopping power can cause a small amount of dose to spill over into an organ at risk (OAR). Previous models for calculating normal tissue complication probabilities (NTCPs) relied on the equivalent uniform dose (EUD) model, in which the organ was treated as receiving 1/3, 2/3, or whole-organ irradiation. However, volumes smaller than 1/3 of the total volume render this EUD-based approach inapplicable. In this work the case for an experimental, data-based replacement at low volumes is investigated. Lung fibrosis is studied as an NTCP effect typically arising from dose spilling over from tumour irradiation at the spinal base. Using a 3D geometrical model of the lungs, irradiations are modelled with variable dose-overflow parameters. To calculate NTCPs without the EUD model, experimental data from the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) reports are used. Additional side projects are introduced and explained at various points. A typical radiotherapy course of 30 fractions of 2 Gy is simulated, and a range of target volume geometries and irradiation types is investigated. Investigations with X-rays found the majority of the data-point ratios (ratios of EUD values found from calculation-based and data-based methods) within 20% of unity, showing relatively close agreement. The ratios did not systematically favour one particular predictive method, and no Vx metric was found to consistently outperform another. Agreement was good in some cases and poor in others, a pattern also anticipated in the literature. The overall results lead to the conclusion that there is no reason to discount the data-based predictive method, particularly as a low-volume replacement.
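The EUD model referred to above reduces a dose-volume histogram to a single number, gEUD = (sum_i v_i * D_i^a)^(1/a), with an organ-specific volume parameter a. A sketch with illustrative numbers:

```python
def generalized_eud(dose_gy, vol_frac, a):
    """Generalized equivalent uniform dose from a differential DVH:
    gEUD = (sum_i v_i * D_i^a)^(1/a). The parameter `a` is
    organ-specific; the values used below are illustrative."""
    assert abs(sum(vol_frac) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(v * d**a for d, v in zip(dose_gy, vol_frac)) ** (1.0 / a)

# A uniform dose is recovered unchanged for any a ...
u = generalized_eud([20.0, 20.0], [0.5, 0.5], a=1.2)
# ... while a large a pulls the gEUD toward the hottest subvolume
s = generalized_eud([60.0, 0.0], [0.5, 0.5], a=8.0)
print(u, s)
```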
Modeling and predicting tumor response in radioligand therapy.
Kletting, Peter; Thieme, Anne; Eberhardt, Nina; Rinscheid, Andreas; D'Alessandria, Calogero; Allmann, Jakob; Wester, Hans-Jürgen; Tauber, Robert; Beer, Ambros J; Glatting, Gerhard; Eiber, Matthias
2018-05-10
The aim of this work was to develop a theranostic method that allows predicting PSMA-positive tumor volume after radioligand therapy (RLT), based on a pre-therapeutic PET/CT measurement and physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) modeling, using the example of RLT with 177Lu-labeled PSMA for imaging and therapy (PSMA I&T). Methods: A recently developed PBPK model for 177Lu-PSMA I&T RLT was extended to account for exponential tumor growth and reduction due to irradiation (linear-quadratic model). Data of 13 patients with metastatic castration-resistant prostate cancer (mCRPC) were retrospectively analyzed. Pharmacokinetic/pharmacodynamic parameters were simultaneously fitted in a Bayesian framework to PET/CT activity concentrations, planar scintigraphy data, and tumor volumes before and 6 weeks after therapy. The method was validated using leave-one-out (jackknife) cross-validation. The tumor volume post therapy was predicted based on pre-therapy PET/CT imaging and PBPK/PD modeling. Results: The relative deviation of the predicted and measured PSMA-positive tumor volume (6 weeks post therapy) was 1 ± 40% after excluding one patient (PSA negative) from the population. The radiosensitivity for the PSA-positive patients was determined to be 0.0172 ± 0.0084 Gy^-1. Conclusion: The proposed method is the first attempt to predict the PSMA-positive tumor volume after radioligand therapy solely from PET/CT and modeling methods. Internal validation shows that this is feasible with acceptable accuracy. Improvement of the method and external validation of the model are ongoing. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
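The pharmacodynamic core of such a model is linear-quadratic cell kill combined with exponential regrowth. A heavily simplified, purely illustrative sketch (the parameter values below are hypothetical, not the fitted patient values):

```python
import math

def predicted_volume(v0_ml, n_fractions, dose_per_fraction_gy,
                     alpha_per_gy, beta_per_gy2,
                     growth_rate_per_day, days_elapsed):
    """Tumor volume after irradiation: linear-quadratic surviving
    fraction per fraction, exp(-(alpha*d + beta*d^2)), applied
    n_fractions times, with exponential regrowth over elapsed time."""
    d = dose_per_fraction_gy
    sf = math.exp(-(alpha_per_gy * d + beta_per_gy2 * d * d))
    return v0_ml * sf**n_fractions * math.exp(growth_rate_per_day * days_elapsed)

# Hypothetical 100 mL lesion, 4 fractions of 2 Gy, regrowth neglected
print(predicted_volume(100.0, 4, 2.0, 0.1, 0.01, 0.0, 0))
```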
Model-based flow rate control for an orifice-type low-volume air sampler
USDA-ARS?s Scientific Manuscript database
The standard method of measuring air suspended particulate matter concentration per volume of air consists of continuously drawing a defined volume of air across a filter over an extended period of time, then measuring the mass of the filtered particles and dividing it by the total volume sampled ov...
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
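The effect being modeled is a convolution of the true profile with the detector response. A sketch with a top-hat response (the chamber radius and the idealized step-edged profile are assumptions for illustration):

```python
import numpy as np

def volume_averaged(profile, x_mm, chamber_radius_mm=3.0):
    """Model the ionization-chamber volume averaging effect as a
    convolution of the true profile with a uniform (top-hat) detector
    response of the chamber's width."""
    dx = x_mm[1] - x_mm[0]
    half_width = int(round(chamber_radius_mm / dx))
    kernel = np.ones(2 * half_width + 1)
    kernel /= kernel.sum()                    # unit-area response
    return np.convolve(profile, kernel, mode="same")

x = np.arange(-50.0, 50.0, 0.5)
true_profile = (np.abs(x) < 25).astype(float)  # idealized step-edged field
measured = volume_averaged(true_profile, x)
# The penumbra is broadened while the integral is preserved
print(np.count_nonzero(measured > 0) > np.count_nonzero(true_profile > 0))
```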
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgess, Ward A.; Tapriyal, Deepak; Morreale, Bryan D.
2013-12-01
This research focuses on providing the petroleum reservoir engineering community with robust models of hydrocarbon density and viscosity at the extreme temperature and pressure conditions (up to 533 K and 276 MPa, respectively) characteristic of ultra-deep reservoirs, such as those associated with the deepwater wells in the Gulf of Mexico. Our strategy is to base the volume-translated (VT) Peng-Robinson (PR) and Soave-Redlich-Kwong (SRK) cubic equations of state (EoSs) and the perturbed-chain statistical associating fluid theory (PC-SAFT) on an extensive data base of high-temperature (278-533 K), high-pressure (6.9-276 MPa) density data rather than fitting the models to low-pressure saturated liquid density data. This high-temperature, high-pressure (HTHP) data base consists of literature data for hydrocarbons ranging from methane to C40. The three new models developed in this work, HTHP VT-PR EoS, HTHP VT-SRK EoS, and hybrid PC-SAFT, yield mean absolute percent deviation (MAPD) values for HTHP hydrocarbon density of ~2.0%, ~1.5%, and <1.0%, respectively. An effort was also made to provide accurate hydrocarbon viscosity models based on literature data. Viscosity values are estimated with the frictional theory (f-theory) and free volume (FV) theory of viscosity. The best results were obtained when the PC-SAFT equation was used to obtain both the attractive and repulsive pressure inputs to f-theory, and the density input to FV theory. Both viscosity models provide accurate results at pressures to 100 MPa, but experimental and model results can deviate by more than 25% at pressures above 200 MPa.
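For the cubic-EoS part, a minimal Peng-Robinson density calculation can be sketched; setting the translation constant c to a fitted value gives the volume-translated variant. The solver bracket and the methane test values are illustrative assumptions:

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def pr_molar_volume(t_k, p_pa, tc_k, pc_pa, omega, c_m3mol=0.0):
    """Molar volume (m^3/mol) from the Peng-Robinson EoS, with an
    optional constant volume translation c (v = v_PR - c); c = 0
    gives the untranslated PR EoS."""
    a = 0.45724 * R**2 * tc_k**2 / pc_pa
    b = 0.07780 * R * tc_k / pc_pa
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(t_k / tc_k)))**2

    def f(v):  # pressure residual at molar volume v
        return R * t_k / (v - b) - a * alpha / (v*v + 2.0*b*v - b*b) - p_pa

    lo, hi = b * 1.0001, 1.0   # f > 0 near v = b; f < 0 at a large volume
    for _ in range(200):       # bisection (single root at supercritical T)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi) - c_m3mol

# Methane at 300 K and 10 MPa (critical constants from standard tables)
v = pr_molar_volume(300.0, 10.0e6, 190.6, 4.599e6, 0.011)
print(0.01604 / v)  # density in kg/m^3
```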
Control volume based hydrocephalus research; analysis of human data
NASA Astrophysics Data System (ADS)
Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer
2010-11-01
Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume, and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach can directly incorporate the diverse measurements obtained by clinicians into a simple, direct, and robust mechanics-based framework. Clinical data obtained for analysis are discussed, along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
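The control volume statement for the CSF space is just integral mass conservation, dV/dt = Q_in - Q_out. A minimal forward-Euler sketch with hypothetical flow waveforms:

```python
def csf_volume(q_in_ml_s, q_out_ml_s, dt_s, v0_ml):
    """Integrate dV/dt = Q_in - Q_out over sampled flow waveforms
    (incompressible fluid, single control volume)."""
    v = v0_ml
    for qi, qo in zip(q_in_ml_s, q_out_ml_s):
        v += (qi - qo) * dt_s
    return v

# A constant 0.5 mL/s net inflow for 10 s raises the volume by 5 mL
print(csf_volume([1.0] * 10, [0.5] * 10, 1.0, 100.0))  # -> 105.0
```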
A discrete model of Ostwald ripening based on multiple pairwise interactions
NASA Astrophysics Data System (ADS)
Di Nunzio, Paolo Emilio
2018-06-01
A discrete multi-particle model of Ostwald ripening based on direct pairwise interactions is developed for particles with incoherent interfaces as an alternative to the classical LSW mean field theory. The rate of matter exchange depends on the average surface-to-surface interparticle distance, a characteristic feature of the system which naturally incorporates the effect of volume fraction of second phase. The multi-particle diffusion is described through the definition of an interaction volume containing all the particles involved in the exchange of solute. At small volume fractions this is proportional to the size of the central particle, at higher volume fractions it gradually reduces as a consequence of diffusion screening described on a geometrical basis. The topological noise present in real systems is also included. For volume fractions below about 0.1 the model predicts broad and right-skewed stationary size distributions resembling a lognormal function. Above this value, a transition to sharper, more symmetrical but still right-skewed shapes occurs. An excellent agreement with experiments is obtained for 3D particle size distributions of solid-solid and solid-liquid systems with volume fraction 0.07, 0.30, 0.52 and 0.74. The kinetic constant of the model depends on the cube root of volume fraction up to about 0.1, then increases rapidly with an upward concavity. It is in good agreement with the available literature data on solid-liquid mixtures in the volume fraction range from 0.20 to about 0.75.
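The pairwise-exchange mechanism can be caricatured in a few lines: each pair transfers matter from the smaller particle (higher Gibbs-Thomson solubility) to the larger, with total volume conserved exactly. The distance dependence and interaction-volume screening of the paper are omitted here, and the rate constant is arbitrary:

```python
import math

def ripening_step(volumes, dt=1e-3, k=1.0):
    """Toy multi-particle Ostwald ripening step: every pair exchanges
    a volume increment proportional to the curvature difference
    (1/r_i - 1/r_j), so matter flows small-to-large and the total
    volume is conserved by construction."""
    v = list(volumes)
    n = len(v)
    for i in range(n):
        for j in range(i + 1, n):
            if v[i] <= 0.0 or v[j] <= 0.0:
                continue
            ri = (3.0 * v[i] / (4.0 * math.pi)) ** (1.0 / 3.0)
            rj = (3.0 * v[j] / (4.0 * math.pi)) ** (1.0 / 3.0)
            dv = k * dt * (1.0 / ri - 1.0 / rj)  # > 0 when i is smaller
            v[i] -= dv
            v[j] += dv
    return v

vols = [0.5, 1.0, 2.0]
out = ripening_step(vols)
print(out)  # the largest particle grows, the smallest shrinks
```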
The impact of surface area, volume, curvature, and Lennard-Jones potential to solvation modeling.
Nguyen, Duc D; Wei, Guo-Wei
2017-01-05
This article explores the impact of surface area, volume, curvature, and Lennard-Jones (LJ) potential on solvation free energy predictions. Rigidity surfaces are utilized to generate robust analytical expressions for maximum, minimum, mean, and Gaussian curvatures of solvent-solute interfaces, and to define a generalized Poisson-Boltzmann (GPB) equation with a smooth dielectric profile. Extensive correlation analysis is performed to examine the linear dependence of surface area, surface-enclosed volume, maximum curvature, minimum curvature, mean curvature, and Gaussian curvature for solvation modeling. It is found that surface areas and surface-enclosed volumes are highly correlated with each other, and poorly correlated with the various curvatures, for six test sets of molecules. Different curvatures are weakly correlated with each other across the six test sets, but are strongly correlated with each other within each test set. Based on this correlation analysis, we construct twenty-six nontrivial nonpolar solvation models. Our numerical results reveal that the LJ potential plays a vital role in nonpolar solvation modeling, especially for molecules involving strong van der Waals interactions. It is found that curvatures are at least as important as surface area or surface-enclosed volume in nonpolar solvation modeling. In conjunction with the GPB model, various curvature-based nonpolar solvation models are shown to offer some of the best solvation free energy predictions for a wide range of test sets. For example, root mean square errors from a model constituting surface area, volume, mean curvature, and LJ potential are less than 0.42 kcal/mol for all test sets. © 2016 Wiley Periodicals, Inc.
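The kind of correlation analysis described can be reproduced in miniature: for a family of spheres, surface area and enclosed volume are almost perfectly correlated while the mean curvature (1/r) runs the other way. A sketch on synthetic geometry, not the paper's molecule sets:

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

radii = [1.0, 1.5, 2.0, 2.5, 3.0]
area = [4.0 * math.pi * r**2 for r in radii]
volume = [4.0 / 3.0 * math.pi * r**3 for r in radii]
curvature = [1.0 / r for r in radii]           # mean curvature of a sphere
print(pearson(area, volume), pearson(area, curvature))
```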
Tests of a habitat suitability model for black-capped chickadees
Schroeder, Richard L.
1990-01-01
The black-capped chickadee (Parus atricapillus) Habitat Suitability Index (HSI) model provides a quantitative rating of the capability of a habitat to support breeding, based on measures related to food and nest site availability. The model assumption that tree canopy volume can be predicted from measures of tree height and canopy closure was tested using data from foliage volume studies conducted in the riparian cottonwood habitat along the South Platte River in Colorado. Least absolute deviations (LAD) regression showed that canopy cover and overstory tree height yielded volume predictions significantly lower than volumes estimated by more direct methods. Revisions to these model relations resulted in improved predictions of foliage volume. The relation between the HSI and estimates of black-capped chickadee population densities was examined using LAD regression for both the original model and the model with the foliage volume revisions. Residuals from these models were compared to residuals from both a zero-slope model and an ideal model. The fit model for the original HSI differed significantly from the ideal model, whereas the fit model for the revised HSI did not. However, neither the fit model for the original HSI nor the fit model for the revised HSI differed significantly from a model with a zero slope. Although further testing of the revised model is needed, its use is recommended for more realistic estimates of tree canopy volume and habitat suitability.
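LAD regression minimizes the sum of absolute residuals rather than their squares, which is what makes the fits robust to outlying plots. A sketch computing an LAD line by iteratively reweighted least squares, one standard way to obtain it; the data are synthetic with one gross outlier:

```python
import numpy as np

def lad_fit(x, y, iters=50, eps=1e-8):
    """Least absolute deviations fit of y = a + b*x via iteratively
    reweighted least squares: weights 1/|residual| steer the weighted
    least-squares solution toward the L1 optimum."""
    A = np.column_stack([np.ones_like(x), x])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        w = np.sqrt(1.0 / np.maximum(np.abs(y - A @ coef), eps))
        coef = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    return coef

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0
y[4] = 30.0                 # one gross outlier
a, b = lad_fit(x, y)
print(a, b)                 # close to (1, 2); the outlier is ignored
```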
An index-flood model for deficit volumes assessment
NASA Astrophysics Data System (ADS)
Strnad, Filip; Moravec, Vojtěch; Hanel, Martin
2017-04-01
The estimation of return periods of extreme hydrological events and the evaluation of risks related to such events are objectives of many water resources studies. The aim of this study is to develop a statistical model for drought indices using extreme value theory and the index-flood method, and to use this model to estimate return levels of maximum deficit volumes of total runoff and baseflow. Deficit volumes for 133 catchments in the Czech Republic over the period 1901-2015, simulated by the hydrological model Bilan, are considered. The characteristics of the simulated deficit periods (severity, intensity, and length) correspond well to those based on observed data. It is assumed that annual maximum deficit volumes in each catchment follow the generalized extreme value (GEV) distribution. The catchments are divided into three homogeneous regions considering long-term mean runoff, potential evapotranspiration, and baseflow. In line with the index-flood method, it is further assumed that the deficit volumes within each homogeneous region are identically distributed after scaling by a site-specific factor. The goodness-of-fit of the statistical model is assessed by the Anderson-Darling statistic. For the estimation of critical values of the test, several resampling strategies allowing appropriate handling of years without drought are presented. Finally, the significance of trends in the deficit volumes is assessed by a likelihood ratio test.
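The index-flood construction can be sketched end to end with synthetic data: scale out the site index, pool, fit one regional GEV, and rescale quantiles back to the sites. All numbers below are illustrative, not the study's data:

```python
import numpy as np
from scipy.stats import genextreme

# Annual maximum deficits at each site = site index * regional growth curve
rng = np.random.default_rng(1)
site_index = np.array([10.0, 25.0, 40.0])       # hypothetical site indices
growth = genextreme.rvs(c=-0.1, loc=1.0, scale=0.3,
                        size=(3, 115), random_state=rng)
sites = site_index[:, None] * growth            # synthetic site records

pooled = (sites / site_index[:, None]).ravel()  # rescale, then pool
c, loc, scale = genextreme.fit(pooled)          # one regional GEV
q100 = genextreme.ppf(0.99, c, loc, scale)      # regional 100-yr growth factor
print(np.round(site_index * q100, 1))           # site-specific return levels
```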
Zhang, Baofeng; Kilburg, Denise; Eastman, Peter; Pande, Vijay S; Gallicchio, Emilio
2017-04-15
We present an algorithm to efficiently compute accurate volumes and surface areas of macromolecules on graphical processing unit (GPU) devices using an analytic model which represents atomic volumes by continuous Gaussian densities. The volume of the molecule is expressed by means of the inclusion-exclusion formula, which is based on the summation of overlap integrals among multiple atomic densities. The surface area of the molecule is obtained by differentiation of the molecular volume with respect to atomic radii. The many-body nature of the model makes a port to GPU devices challenging. To our knowledge, this is the first reported full implementation of this model on GPU hardware. To accomplish this, we have used recursive strategies to construct the tree of overlaps and to accumulate volumes and their gradients on the tree data structures so as to minimize memory contention. The algorithm is used in the formulation of a surface area-based non-polar implicit solvent model implemented as an open source plug-in (named GaussVol) for the popular OpenMM library for molecular mechanics modeling. GaussVol is 50 to 100 times faster than our best optimized implementation for the CPUs, achieving speeds in excess of 100 ns/day with 1 fs time-step for protein-sized systems on commodity GPUs. © 2017 Wiley Periodicals, Inc.
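The two-body building block of the inclusion-exclusion formula is the Gaussian product integral, which has a closed form. A sketch using the standard formula for isotropic Gaussians (not the GaussVol implementation itself):

```python
import math

def gaussian_overlap(a1, b1, c1, a2, b2, c2):
    """Closed-form integral over R^3 of the product of two isotropic
    Gaussians a_i * exp(-b_i * |r - c_i|^2): the two-body term of the
    inclusion-exclusion volume expansion."""
    d2 = sum((u - v) ** 2 for u, v in zip(c1, c2))
    return (a1 * a2 * (math.pi / (b1 + b2)) ** 1.5
            * math.exp(-b1 * b2 / (b1 + b2) * d2))

def pair_volume(v1, v2, v12):
    """Two-atom inclusion-exclusion: V = V1 + V2 - V12."""
    return v1 + v2 - v12

# The overlap decays with center-to-center distance, as expected
near = gaussian_overlap(1.0, 1.0, (0.0, 0.0, 0.0), 1.0, 1.0, (0.0, 0.0, 0.0))
far = gaussian_overlap(1.0, 1.0, (0.0, 0.0, 0.0), 1.0, 1.0, (2.0, 0.0, 0.0))
print(near > far)
```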
FOSSIL2 energy policy model documentation: FOSSIL2 documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1980-10-01
This report discusses the structure, derivations, assumptions, and mathematical formulation of the FOSSIL2 model. Each major facet of the model - supply/demand interactions, industry financing, and production - has been designed to parallel closely the actual cause/effect relationships determining the behavior of the United States energy system. The data base for the FOSSIL2 program is large, as is appropriate for a system dynamics simulation model. When possible, all data were obtained from sources well known to experts in the energy field. Cost and resource estimates are based on DOE data whenever possible. This report presents the FOSSIL2 model at several levels. Volumes II and III of this report list the equations that comprise the FOSSIL2 model, along with variable definitions and a cross-reference list of the model variables. Volume II provides the model equations with each of their variables defined, while Volume III lists the equations, and a one line definition for equations, in a shorter, more readable format.
geomIO: A tool for geodynamicists to turn 2D cross-sections into 3D geometries
NASA Astrophysics Data System (ADS)
Baumann, Tobias; Bauville, Arthur
2016-04-01
In numerical deformation models, material properties are usually defined on elements (e.g., in body-fitted finite elements) or on a set of Lagrangian markers (Eulerian, ALE or mesh-free methods). In any case, geometrical constraints are needed to assign different material properties to the model domain. Whereas simple geometries such as spheres, layers or cuboids can easily be programmed, it quickly gets complex and time-consuming to create more complicated geometries for numerical model setups, especially in three dimensions. geomIO (geometry I/O, http://geomio.bitbucket.org/) is a MATLAB-based library that has two main functionalities. First, it can be used to create 3D volumes based on series of 2D vector drawings, similar to a CAD program; second, it uses these 3D volumes to assign material properties to the numerical model domain. The drawings can conveniently be created using the open-source vector graphics software Inkscape; Adobe Illustrator is also partially supported. The drawings represent a series of cross-sections in the 3D model domain, for example, cross-sectional interpretations of seismic tomography. geomIO is then used to read the drawings and to create 3D volumes by interpolating between the cross-sections. In the second part, the volumes are used to assign material phases to markers inside the volumes. Multiple volumes can be created at the same time and, depending on the order of assignment, unions or intersections can be built to assign additional material phases. geomIO also offers the possibility to create 3D temperature structures for geodynamic models based on depth-dependent parameterisations, for example the half-space cooling model. In particular, this can be applied to geometries of subducting slabs of arbitrary shape. Nevertheless, geomIO is kept very general and can be used for a variety of applications.
We present examples of setup generation from pictures of micro-scale tectonics and lithospheric scale setups of 3D present-day model geometries.
Action-based Dynamical Modeling for the Milky Way Disk: The Influence of Spiral Arms
NASA Astrophysics Data System (ADS)
Trick, Wilma H.; Bovy, Jo; D'Onghia, Elena; Rix, Hans-Walter
2017-04-01
RoadMapping is a dynamical modeling machinery developed to constrain the Milky Way's (MW) gravitational potential by simultaneously fitting an axisymmetric parametrized potential and an action-based orbit distribution function (DF) to discrete 6D phase-space measurements of stars in the Galactic disk. In this work, we demonstrate RoadMapping's robustness in the presence of spiral arms by modeling data drawn from an N-body simulation snapshot of a disk-dominated galaxy of MW mass with strong spiral arms (but no bar), exploring survey volumes with radii 500 pc ≤ r_max ≤ 5 kpc. The potential constraints are very robust, even though we use a simple action-based DF, the quasi-isothermal DF. The best-fit RoadMapping model always recovers the correct gravitational forces where most of the stars that entered the analysis are located, even for small volumes. For data from large survey volumes, RoadMapping finds axisymmetric models that average well over the spiral arms. Unsurprisingly, the models are slightly biased by the excess of stars in the spiral arms. Gravitational potential models derived from survey volumes with at least r_max = 3 kpc can be reliably extrapolated to larger volumes. However, a large radial survey extent, r_max ~ 5 kpc, is needed to correctly recover the halo scale length. In general, the recovery and extrapolability of potentials inferred from data sets that were drawn from inter-arm regions appear to be better than those of data sets drawn from spiral arms. Our analysis implies that building axisymmetric models for the Galaxy with upcoming Gaia data will lead to sensible and robust approximations of the MW's potential.
Modeling dam-break flows using finite volume method on unstructured grid
USDA-ARS?s Scientific Manuscript database
Two-dimensional shallow water models based on unstructured finite volume method and approximate Riemann solvers for computing the intercell fluxes have drawn growing attention because of their robustness, high adaptivity to complicated geometry and ability to simulate flows with mixed regimes and di...
Imaging Effects of Neurotrophic Factor Genes on Brain Plasticity and Repair in Multiple Sclerosis
2010-07-01
...cortical thickness and subcortical volume measures, lesion volumetry, and voxel-based morphometry and diffusion imaging. Regression and symbolic modeling...
Information-driven trade and price-volume relationship in artificial stock markets
NASA Astrophysics Data System (ADS)
Liu, Xinghua; Liu, Xin; Liang, Xiaobei
2015-07-01
The positive relation between stock price changes and trading volume (price-volume relationship) as a stylized fact has attracted significant interest among finance researchers and investment practitioners. However, until now, consensus has not been reached regarding the causes of the relationship based on real market data because extracting valuable variables (such as information-driven trade volume) from real data is difficult. This lack of general consensus motivates us to develop a simple agent-based computational artificial stock market where extracting the necessary variables is easy. Based on this model and its artificial data, our tests have found that the aggressive trading style of informed agents can produce a price-volume relationship. Therefore, the information spreading process is not a necessary condition for producing price-volume relationship.
Polarized BRDF for coatings based on three-component assumption
NASA Astrophysics Data System (ADS)
Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong
2017-02-01
A pBRDF (polarized bidirectional reflectance distribution function) model for coatings is presented based on a three-component reflection assumption, in order to improve polarized-scattering simulation capability for space objects. In this model, the specular reflection is described by microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection follows Fresnel's law, and both the multiple reflection and the volume scattering are assumed to be depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, validate that the pBRDF modeling accuracy is significantly improved by the three-component model given in this paper.
NASA Astrophysics Data System (ADS)
Maack, Joachim; Lingenfelder, Marcus; Weinacker, Holger; Koch, Barbara
2016-07-01
Remote sensing-based timber volume estimation is key for modelling the regional potential, accessibility and price of lignocellulosic raw material for an emerging bioeconomy. We used a unique wall-to-wall airborne LiDAR dataset and Landsat 7 satellite images in combination with terrestrial inventory data derived from the National Forest Inventory (NFI), and applied generalized additive models (GAM) to estimate spatially explicit timber distribution and volume in forested areas. Since the NFI data showed an underlying structure regarding size and ownership, we additionally constructed a socio-economic predictor to enhance the accuracy of the analysis. Furthermore, we balanced the training dataset with a bootstrap method to achieve unbiased regression weights for interpolating timber volume. Finally, we compared and discussed the model performance of the original approach (r2 = 0.56, NRMSE = 9.65%), the approach with balanced training data (r2 = 0.69, NRMSE = 12.43%) and the final approach with balanced training data and the additional socio-economic predictor (r2 = 0.72, NRMSE = 12.17%). The results demonstrate the usefulness of remote sensing techniques for mapping timber volume for a future lignocellulose-based bioeconomy.
NASA Astrophysics Data System (ADS)
Lu, Haibao; Wang, Xiaodong; Yao, Yongtao; Qing Fu, Yong
2018-06-01
Phenomenological models based on frozen volume parameters could well predict the shape recovery behavior of shape memory polymers (SMPs), but the physical meaning of using frozen volume parameters to describe thermomechanical properties has not been well established. In this study, the fundamental working mechanisms of the shape memory effect (SME) in amorphous SMPs, whose temperature-dependent viscoelastic behavior follows the Eyring equation, have been established with consideration of both internal stress and the resulting frozen volume. The stress-strain constitutive relation was initially modeled to quantitatively describe the effects of internal stresses at the macromolecular scale based on the transient network theory. A phenomenological ‘frozen volume’ model was then established to characterize the macromolecular structure and SME of amorphous SMPs based on a two-site stress-relaxation model. The effects of internal stress, frozen volume and strain rate on the shape memory behavior and thermomechanical properties of the SMP were investigated. Finally, the simulation results were compared with experimental results reported in the literature, and good agreement between the theoretical and experimental results was achieved. The novelty and key differences of our newly proposed model with respect to previous reports are: (1) the ‘frozen volume’ in our study is caused by the internal stress and governed by the two-site model theory, and thus has a clear physical meaning; (2) the model can be applied to characterize and predict both the thermal and thermomechanical behaviors of SMPs based on the constitutive relationship with internal stress parameters. It is expected to provide a powerful tool to investigate the thermomechanical behavior of SMPs, for which both the macromolecular structure characteristics and the SME can be predicted using this ‘frozen volume’ model.
Compaction-Based Deformable Terrain Model as an Interface for Real-Time Vehicle Dynamics Simulations
2013-04-16
...to vehicular loads, and the resulting visco-elastic-plastic stress/strain on the affected soil volume. Pedo-transfer functions allow for the calculation of the soil mechanics model...
Xuan, Ziming; Chaloupka, Frank J; Blanchette, Jason G; Nguyen, Thien H; Heeren, Timothy C; Nelson, Toben F; Naimi, Timothy S
2015-03-01
U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (-0.09, P = 0.02 versus -0.005, P = 0.63) and price elasticity (-1.40, P < 0.01 versus -0.76, P = 0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P = 0.11), while the R-squares for models relying only on the volume-based tax declined (P < 0.01). Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. © 2014 Society for the Study of Addiction.
Xuan, Ziming; Chaloupka, Frank J.; Blanchette, Jason G.; Nguyen, Thien H.; Heeren, Timothy C.; Nelson, Toben F.; Naimi, Timothy S.
2015-01-01
Aims U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Design Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. Findings In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (−0.09, P=0.02 versus −0.005, P=0.63) and price elasticity (−1.40, P<0.01 versus −0.76, P=0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P=0.11), while the R-squares for models relying only on the volume-based tax declined (P<0.01). Conclusions Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. PMID:25428795
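The tax elasticity reported above is essentially a slope in log-log space. The following sketch, on synthetic state-year data rather than the APIS/BRFSS data used in the paper, recovers an assumed elasticity by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic state-year panel (names and values illustrative, not APIS data):
# a combined tax-per-drink measure in dollars and adult binge-drinking
# prevalence in percent.
n = 510
tax_per_drink = rng.uniform(0.05, 0.50, size=n)
true_elasticity = -0.09
prevalence = 18.0 * tax_per_drink**true_elasticity * rng.lognormal(0.0, 0.05, size=n)

# Tax elasticity is the slope of log(prevalence) on log(tax): a 1% tax
# increase is associated with a `slope`% change in prevalence.
slope, intercept = np.polyfit(np.log(tax_per_drink), np.log(prevalence), 1)
print(round(slope, 3))
```

The actual study adjusts for covariates and survey weighting, so this is the elasticity computation in isolation, not the full model.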
Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.
2008-01-01
Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically-based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined by either measuring the volume of material eroded from the channels, or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature, to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model. 
This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.
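A multiple regression of this form can be sketched as follows. The predictors mirror those named above (steep basin area, moderately-to-severely burned area, total storm rainfall), but the data and coefficients are synthetic; this is not the fitted Western U.S. model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training table for burned basins (all values illustrative):
# basin area with slope >= 30% (km^2), area burned at moderate or high
# severity (km^2), and total storm rainfall (mm).
n = 53
X = np.column_stack([
    rng.uniform(0.1, 10.0, n),   # steep basin area
    rng.uniform(0.1, 8.0, n),    # burned area
    rng.uniform(5.0, 80.0, n),   # storm rainfall
])
# Debris-flow volume is modeled on a log scale, as in empirical models.
log_volume = 2.0 + 0.3 * X[:, 0] + 0.4 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, log_volume, rcond=None)

# R^2 of the fit, analogous to the explained-variability figure above.
pred = A @ coef
r2 = 1 - np.sum((log_volume - pred) ** 2) / np.sum((log_volume - log_volume.mean()) ** 2)
print(round(r2, 2))
```

Validation against held-out basins, as done in the study, would use the residual standard error of this fit rather than R² alone.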
Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.
The Analysis for Energy Consumption of Marine Air Conditioning System Based on VAV and VWV
NASA Astrophysics Data System (ADS)
Xu, Sai Feng; Yang, Xing Lin; Le, Zou Ying
2018-06-01
For ocean-going vessels sailing in different areas of the sea, changes in external environmental factors cause frequent changes in load, yet traditional marine air-conditioning systems are usually designed with a fixed cooling capacity, a design approach that causes serious waste of resources. A new type of marine air-conditioning system is proposed in this paper, which uses a seawater-source heat pump combined with variable air volume (VAV) and variable water volume (VWV) technology. The dynamic loads of the multifunctional cabins of a ship navigating a typical Eurasian route were calculated in Simulink; the model can predict load changes over the full voyage. Based on the simulation model, the effects of variable air volume and variable water volume on the energy consumption of the air-conditioning system are analyzed. The results show that when VAV is coupled with VWV, the energy saving rate is 23.2%. Therefore, applying variable air volume and variable water volume technology to marine air-conditioning systems can achieve economic and energy-saving advantages.
Modeling surficial sand and gravel deposits
Bliss, J.D.; Page, N.J.
1994-01-01
Mineral-deposit models are an integral part of quantitative mineral-resource assessment. As the focus of mineral-deposit modeling has moved from metals to industrial minerals, procedure has been modified and may be sufficient to model surficial sand and gravel deposits. Sand and gravel models are needed to assess resource-supply analyses for planning future development and renewal of infrastructure. Successful modeling of sand and gravel deposits must address (1) deposit volumes and geometries, (2) sizes of fragments within the deposits, (3) physical characteristics of the material, and (4) chemical composition and chemical reactivity of the material. Several models of sand and gravel volumes and geometries have been prepared and suggest the following: Sand and gravel deposits in alluvial fans have a median volume of 35 million m³. Deposits in all other geologic settings have a median volume of 5.4 million m³, a median area of 120 ha, and a median thickness of 4 m. The area of a sand and gravel deposit can be predicted from volume using a regression model (log [area (ha)] = 1.47 + 0.79 log [volume (million m³)]). In similar fashion, the volume of a sand and gravel deposit can be predicted from area using the regression (log [volume (million m³)] = -1.45 + 1.07 log [area (ha)]). Classifying deposits by fragment size can be done using models of the percentage of sand, gravel, and silt within deposits. A classification scheme based on fragment size is sufficiently general to be applied anywhere. © 1994 Oxford University Press.
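Assuming base-10 logarithms, as is conventional for such regressions, the two area-volume relations above can be applied directly:

```python
import math

def predicted_area_ha(volume_million_m3):
    # log10[area (ha)] = 1.47 + 0.79 * log10[volume (million m^3)]
    return 10 ** (1.47 + 0.79 * math.log10(volume_million_m3))

def predicted_volume_million_m3(area_ha):
    # log10[volume (million m^3)] = -1.45 + 1.07 * log10[area (ha)]
    return 10 ** (-1.45 + 1.07 * math.log10(area_ha))

# The median non-fan deposit (5.4 million m^3) should predict an area
# near the reported median of 120 ha.
print(round(predicted_area_ha(5.4)))
```

Note that the two regressions are not exact inverses of each other, since each minimizes error in its own response variable.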
The Volume Field Model about Strong Interaction and Weak Interaction
NASA Astrophysics Data System (ADS)
Liu, Rongwu
2016-03-01
For a long time researchers have believed that strong interaction and weak interaction are realized by exchanging intermediate particles. This article proposes a new mechanism as follows: Volume field is a form of material existence in plane space, it takes volume-changing motion in the form of non-continuous motion, and volume fields have strong interaction or weak interaction between them by overlapping their volume fields. Based on these concepts, this article further proposes a "bag model" of volume field for atomic nucleus, which includes three sub-models of the complex structure of fundamental body (such as quark), the atom-like structure of hadron, and the molecule-like structure of atomic nucleus. This article also proposes a plane space model and formulates a physics model of volume field in the plane space, as well as a model of space-time conversion. The model of space-time conversion suggests that: Point space-time and plane space-time convert each other by means of merging and rupture respectively, and the essence of space-time conversion is the mutual transformation of matter and energy; the process of collision of high energy hadrons, the formation of black holes, and the Big Bang of the universe are three kinds of space-time conversions.
ERIC Educational Resources Information Center
McDonnell Douglas Astronautics Co. - East, St. Louis, MO.
This is the second volume of a two volume study. The first volume examined the literature to identify authoring aids for developing instructional materials, and to identify information clearing houses for existing materials. The purpose of this volume was to develop a means for assessing the cost versus expected benefits of innovations in…
NASA Astrophysics Data System (ADS)
Burk, Laurel M.; Lee, Yueh Z.; Heathcote, Samuel; Wang, Ko-han; Kim, William Y.; Lu, Jianping; Zhou, Otto
2011-03-01
Current optical imaging techniques can successfully measure tumor load in murine models of lung carcinoma but lack structural detail. We demonstrate that respiratory gated micro-CT imaging of such models gives information about structure and correlates with tumor load measurements by optical methods. Four mice with multifocal, Kras-induced tumors expressing firefly luciferase were imaged against four controls using both optical imaging and respiratory gated micro-CT. CT images of anesthetized animals were acquired with a custom CNT-based system using 30 ms x-ray pulses during peak inspiration; respiration motion was tracked with a pressure sensor beneath each animal's abdomen. Optical imaging based on the Luc+ signal correlating with tumor load was performed on a Xenogen IVIS Kinetix. Micro-CT images were post-processed using Osirix, measuring lung volume with region growing. Diameters of the largest three tumors were measured. Relationships between tumor size, lung volumes, and optical signal were compared. CT images and optical signals were obtained for all animals at two time points. In all lobes of the Kras+ mice in all images, tumors were visible; the smallest to be readily identified measured approximately 300 microns diameter. CT-derived tumor volumes and optical signals related linearly, with r=0.94 for all animals. When derived for only tumor bearing animals, r=0.3. The trend of each individual animal's optical signal tracked correctly based on the CT volumes. Interestingly, lung volumes also correlated positively with optical imaging data and tumor volume burden, suggesting active remodeling.
Improved biovolume estimation of Microcystis aeruginosa colonies: A statistical approach.
Alcántara, I; Piccini, C; Segura, A M; Deus, S; González, C; Martínez de la Escalera, G; Kruk, C
2018-05-27
The Microcystis aeruginosa complex (MAC) clusters many of the most common freshwater and brackish bloom-forming cyanobacteria. In monitoring protocols, biovolume estimation is a common approach to determining the biomass of MAC colonies and is useful for prediction purposes. Biovolume (μm³ mL⁻¹) is calculated by multiplying organism abundance (org L⁻¹) by colonial volume (μm³ org⁻¹). Colonial volume is estimated based on geometric shapes and requires accurate measurements of dimensions using optical microscopy. A trade-off is posed between easy-to-measure but low-accuracy simple shapes (e.g. sphere) and time-costly but high-accuracy complex shapes (e.g. ellipsoid). The effects of overestimation on ecological studies and on management decisions associated with harmful blooms are significant due to the large sizes of MAC colonies. In this work, we aimed to increase the precision of MAC biovolume estimations by developing a statistical model based on two easy-to-measure dimensions. We analyzed field data from a wide environmental gradient (800 km) spanning freshwater to estuarine and seawater. We measured length, width and depth from ca. 5700 colonies under an inverted microscope and estimated colonial volume using three different recommended geometrical shapes (sphere, prolate spheroid and ellipsoid). Because of the non-spherical shape of MAC, the ellipsoid resulted in the most accurate approximation, whereas the sphere overestimated colonial volume (by a factor of 3-80), especially for large colonies (maximum linear dimension, MLD, above 300 μm). The ellipsoid requires measuring three dimensions and is time-consuming. Therefore, we constructed different statistical models to predict organism depth based on length and width. Splitting the data into training (2/3) and test (1/3) sets, all models resulted in low training (1.41-1.44%) and testing (1.3-2.0%) average errors. The models were also evaluated using three other independent datasets.
The multiple linear model was finally selected to calculate MAC volume as an ellipsoid based on length and width. This work contributes to a better estimation of MAC volume, applicable to monitoring programs as well as to ecological research. Copyright © 2017. Published by Elsevier B.V.
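The sphere-versus-ellipsoid overestimation is easy to quantify with the standard formulas, V_sphere = (π/6)d³ and V_ellipsoid = (π/6)LWD. The colony dimensions below are illustrative, not taken from the study's dataset:

```python
import math

def ellipsoid_volume(length, width, depth):
    # V = (pi/6) * L * W * D: the ellipsoid formula recommended for
    # non-spherical colonies (dimensions in micrometres).
    return math.pi / 6.0 * length * width * depth

def sphere_volume(diameter):
    # Single-dimension sphere approximation: V = (pi/6) * d^3.
    return math.pi / 6.0 * diameter**3

# For a flattened colony, using the longest dimension as the sphere
# diameter overestimates the ellipsoid volume substantially.
L, W, D = 300.0, 150.0, 60.0
ratio = sphere_volume(L) / ellipsoid_volume(L, W, D)
print(round(ratio, 1))
```

The paper's contribution is replacing the measured depth D with a value predicted from L and W by a multiple linear model, which keeps the ellipsoid's accuracy at roughly the measurement cost of the sphere.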
Sun, Jiashu; Stowers, Chris C.; Boczko, Erik M.
2012-01-01
We report on measurements of the volume growth rate of ten individual budding yeast cells using a recently developed MOSFET-based microfluidic Coulter counter. The MOSFET-based microfluidic Coulter counter is very sensitive, provides signals that are immune from baseline drift, and can work with cell culture media of complex composition. These desirable features allow us to directly measure the volume growth rate of single cells of Saccharomyces cerevisiae LYH3865 strain budding yeast in YNB culture media over a whole cell cycle. Results indicate that all budding yeast follow a sigmoid volume growth profile with reduced growth rates at the initial stage before the bud emerges and the final stage after the daughter gets mature. Analysis of the data indicates that even though all piecewise linear, Gompertz, and Hill’s function models can fit the global growth profile equally well, the data strongly support a local exponential growth phenomenon. Accurate volume growth measurements are important for applications in systems biology where quantitative parameters are required for modeling and simulation. PMID:20717618
Sun, Jiashu; Stowers, Chris C; Boczko, Erik M; Li, Deyu
2010-11-07
We report on measurements of the volume growth rate of ten individual budding yeast cells using a recently developed MOSFET-based microfluidic Coulter counter. The MOSFET-based microfluidic Coulter counter is very sensitive, provides signals that are immune from baseline drift, and can work with cell culture media of complex composition. These desirable features allow us to directly measure the volume growth rate of single cells of Saccharomyces cerevisiae LYH3865 strain budding yeast in YNB culture media over a whole cell cycle. Results indicate that all budding yeast follow a sigmoid volume growth profile with reduced growth rates at the initial stage before the bud emerges and the final stage after the daughter gets mature. Analysis of the data indicates that even though all piecewise linear, Gompertz, and Hill's function models can fit the global growth profile equally well, the data strongly support a local exponential growth phenomenon. Accurate volume growth measurements are important for applications in systems biology where quantitative parameters are required for modeling and simulation.
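A Gompertz curve of the kind mentioned above can be fitted with nonlinear least squares. The data below are synthetic, with an illustrative volume scale and cell-cycle duration, not the Coulter-counter measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    # Sigmoid growth curve: asymptotic volume a, displacement b, rate c.
    return a * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(3)
t = np.linspace(0, 120, 60)                    # minutes over one cell cycle
true = gompertz(t, 60.0, 2.0, 0.04)            # volume in femtolitres (illustrative)
volume = true + rng.normal(0.0, 0.5, t.size)   # add measurement noise

# Fit the three parameters by nonlinear least squares from a rough guess.
params, _ = curve_fit(gompertz, t, volume, p0=[50.0, 1.0, 0.05])
a_hat, b_hat, c_hat = params
print(round(a_hat, 1))
```

The paper's point, that several sigmoid families fit the global profile equally well, can be checked by repeating this fit with a Hill function and comparing residuals.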
Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H
2009-01-01
This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses in 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of the relative blood volume (RBV) change with time, as well as the percentage change in HR with respect to RBV, were obtained. The ε-insensitive loss function was used for SVR modeling. Selection of the design parameters, which include the capacity (C), the insensitivity region (ε) and the RBF kernel parameter (σ), was made based on a grid search approach, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data based on a k-fold cross-validation technique. Linear regression was also applied to fit the curves and the AMSE was calculated for comparison with SVR. For the model of RBV with time, SVR gave a lower AMSE for both training (AMSE = 1.5) and testing data (AMSE = 1.4) compared to linear regression (AMSE = 1.8 and 1.5). SVR also provided a better fit for HR with RBV for both training and testing data (AMSE = 15.8 and 16.4) compared to linear regression (AMSE = 25.2 and 20.1).
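The described model selection, a grid search over C, ε and the RBF width with k-fold cross-validation, can be sketched with scikit-learn. The dialysis-like curve, noise level and parameter grid below are illustrative assumptions, not the paper's data or chosen values:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

rng = np.random.default_rng(4)

# Synthetic stand-in for one session: time in minutes versus relative
# blood volume change in percent (values illustrative).
time_min = np.linspace(0, 240, 80).reshape(-1, 1)
rbv = -6.0 * (1 - np.exp(-time_min.ravel() / 90.0)) + rng.normal(0, 0.2, 80)

# Grid search over C, epsilon and the RBF width (gamma), cross-validated
# with k-fold splits, mirroring the paper's selection procedure.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1.0, 10.0, 100.0],
                "epsilon": [0.05, 0.1, 0.5],
                "gamma": [1e-4, 1e-3, 1e-2]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_mean_squared_error",
)
grid.fit(time_min, rbv)

# Cross-validated mean squared error of the best parameter combination.
mse = -grid.best_score_
print(round(mse, 3))
```

scikit-learn parameterizes the RBF kernel by gamma = 1/(2σ²), so the grid over gamma plays the role of the paper's grid over σ.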
1993-06-03
Volume scattering strength data were compared with model calculations based on quasisynoptically collected fishery data, although the scatterers could be major contributors in the fishery sense. Measurements were made at frequencies between … and 5000 Hz in the Norwegian Sea in August 1988 and west of Great Britain in April 1989. Coincidentally, extensive fishery surveys were conducted at the same times (personal communication, Institute of Marine Research, Bergen, Norway, 1990).
Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI
Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.
2016-01-01
We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524
Estimation of the fractional coverage of rainfall in climate models
NASA Technical Reports Server (NTRS)
Eltahir, E. A. B.; Bras, R. L.
1993-01-01
The fraction of the grid cell area covered by rainfall, mu, is an essential parameter in descriptions of land surface hydrology in climate models. A simple procedure is presented for estimating this fraction, based on extensive observations of storm areas and rainfall volumes. Storm area and rainfall volume are often linearly related; this relation can be used to compute the storm area from the volume of rainfall simulated by a climate model. A formula is developed for computing mu, which describes the dependence of the fractional coverage of rainfall on the season of the year, the geographical region, rainfall volume, and the spatial and temporal resolution of the model. The new formula is applied in computing mu over the Amazon region. Significant temporal variability in the fractional coverage of rainfall is demonstrated. The implications of this variability for the modeling of land surface hydrology in climate models are discussed.
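Assuming the linear storm-area/rain-volume relation described above, the fractional coverage mu can be sketched as follows; the proportionality coefficient and grid-cell area are hypothetical placeholders, not the calibrated values from the study:

```python
def fractional_coverage(rain_volume_km3, a_grid_km2, k_km2_per_km3):
    # hypothetical linear storm-area/rain-volume relation: A_storm = k * V
    # (in the study, the coefficient would be calibrated from storm observations
    # and would vary with season, region, and model resolution)
    a_storm = k_km2_per_km3 * rain_volume_km3
    return min(a_storm / a_grid_km2, 1.0)  # mu cannot exceed full coverage

# illustrative numbers: a 250 km x 250 km grid cell and an invented coefficient
mu = fractional_coverage(rain_volume_km3=2.0, a_grid_km2=62500.0, k_km2_per_km3=5000.0)
assert abs(mu - 0.16) < 1e-9
```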
Lee, C-C; Ho, H-C; Jack, Lee C-C; Su, Y-C; Lee, M-S; Hung, S-K; Chou, Pesus
2010-02-01
Oral cancer leads to considerable use of and expenditure on health care. Wide resection of the tumour and reconstruction with a pedicle flap/free flap is widely used. This study was conducted to explore the relationship between hospitalisation costs and surgeon case volume when this operation was performed. A population-based study. This study uses data for the years 2005-2006 obtained from the National Health Insurance Research Database published by the Taiwanese National Health Research Institute. From these population-based data, the authors selected a total of 2663 oral cancer patients who underwent tumour resection and reconstruction. Case volume relationships were based on the following criteria: low-, medium-, high-, and very-high-volume surgeons were defined by
Lake-level frequency analysis for Devils Lake, North Dakota
Wiche, Gregg J.; Vecchia, Aldo V.
1996-01-01
Two approaches were used to estimate future lake-level probabilities for Devils Lake. The first approach is based on an annual lake-volume model, and the second approach is based on a statistical water mass-balance model that generates seasonal lake volumes on the basis of seasonal precipitation, evaporation, and inflow. Autoregressive moving average models were used to model the annual mean lake volume and the difference between the annual maximum lake volume and the annual mean lake volume. Residuals from both models were determined to be uncorrelated with zero mean and constant variance. However, a nonlinear relation between the residuals of the two models was included in the final annual lake-volume model. Because of high autocorrelation in the annual lake levels of Devils Lake, the annual lake-volume model was verified using annual lake-level changes. The annual lake-volume model closely reproduced the statistics of the recorded lake-level changes for 1901-93 except for the skewness coefficient. However, the model output is less skewed than the data indicate because of some unrealistically large lake-level declines. The statistical water mass-balance model requires as inputs seasonal precipitation, evaporation, and inflow data for Devils Lake. Analysis of annual precipitation, evaporation, and inflow data for 1950-93 revealed no significant trends or long-range dependence, so the input time series were assumed to be stationary and short-range dependent. Normality transformations were used to approximately maintain the marginal probability distributions, and a multivariate, periodic autoregressive model was used to reproduce the correlation structure. Each of the coefficients in the model is significantly different from zero at the 5-percent significance level.
Coefficients relating spring inflow from one year to spring and fall inflows from the previous year had the largest effect on the lake-level frequency analysis. Inclusion of parameter uncertainty in the model for generating precipitation, evaporation, and inflow indicates that the upper lake-level exceedance levels from the water mass-balance model are particularly sensitive to parameter uncertainty. The sensitivity in the upper exceedance levels was caused almost entirely by uncertainty in the fitted probability distributions of the quarterly inflows. A method was developed for using long-term streamflow data for the Red River of the North at Grand Forks to reduce the variance in the estimated mean. Comparison of the annual lake-volume model and the water mass-balance model indicates the upper exceedance levels of the water mass-balance model increase much more rapidly than those of the annual lake-volume model. As an example, for simulation year 5, the 99-percent exceedance for the lake level is 1,417.6 feet above sea level for the annual lake-volume model and 1,423.2 feet above sea level for the water mass-balance model. The rapid increase is caused largely by the record precipitation and inflow in the summer and fall of 1993. Because the water mass-balance model produces lake-level traces that closely match the hydrology of Devils Lake, the water mass-balance model is superior to the annual lake-volume model for computing exceedance levels for the 50-year planning horizon.
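The exceedance-level computation described above can be sketched by Monte-Carlo simulation of a simple AR(1) stand-in for the annual lake-volume model; the coefficients and volumes here are illustrative, not the fitted Devils Lake parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_volumes(n_traces, n_years, v0, phi, mu, sigma):
    # AR(1) stand-in for the annual lake-volume ARMA model (illustrative only):
    # V[t] = mu + phi * (V[t-1] - mu) + noise
    v = np.full(n_traces, v0, dtype=float)
    out = np.empty((n_years, n_traces))
    for year in range(n_years):
        v = mu + phi * (v - mu) + rng.normal(0.0, sigma, n_traces)
        out[year] = v
    return out

traces = simulate_volumes(n_traces=5000, n_years=5, v0=1.0, phi=0.9, mu=1.2, sigma=0.1)
# the volume exceeded by only 1 % of traces in simulation year 5
exceed_99 = float(np.percentile(traces[4], 99))
assert exceed_99 > float(np.median(traces[4]))
```

Exceedance levels for lake stage would follow by mapping simulated volumes through a volume-to-level rating curve.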
Lake-level frequency analysis for Devils Lake, North Dakota
Wiche, Gregg J.; Vecchia, Aldo V.
1995-01-01
Two approaches were used to estimate future lake-level probabilities for Devils Lake. The first approach is based on an annual lake-volume model, and the second approach is based on a statistical water mass-balance model that generates seasonal lake volumes on the basis of seasonal precipitation, evaporation, and inflow. Autoregressive moving average models were used to model the annual mean lake volume and the difference between the annual maximum lake volume and the annual mean lake volume. Residuals from both models were determined to be uncorrelated with zero mean and constant variance. However, a nonlinear relation between the residuals of the two models was included in the final annual lake-volume model. Because of high autocorrelation in the annual lake levels of Devils Lake, the annual lake-volume model was verified using annual lake-level changes. The annual lake-volume model closely reproduced the statistics of the recorded lake-level changes for 1901-93 except for the skewness coefficient. However, the model output is less skewed than the data indicate because of some unrealistically large lake-level declines. The statistical water mass-balance model requires as inputs seasonal precipitation, evaporation, and inflow data for Devils Lake. Analysis of annual precipitation, evaporation, and inflow data for 1950-93 revealed no significant trends or long-range dependence, so the input time series were assumed to be stationary and short-range dependent. Normality transformations were used to approximately maintain the marginal probability distributions, and a multivariate, periodic autoregressive model was used to reproduce the correlation structure. Each of the coefficients in the model is significantly different from zero at the 5-percent significance level.
Coefficients relating spring inflow from one year to spring and fall inflows from the previous year had the largest effect on the lake-level frequency analysis. Inclusion of parameter uncertainty in the model for generating precipitation, evaporation, and inflow indicates that the upper lake-level exceedance levels from the water mass-balance model are particularly sensitive to parameter uncertainty. The sensitivity in the upper exceedance levels was caused almost entirely by uncertainty in the fitted probability distributions of the quarterly inflows. A method was developed for using long-term streamflow data for the Red River of the North at Grand Forks to reduce the variance in the estimated mean. Comparison of the annual lake-volume model and the water mass-balance model indicates the upper exceedance levels of the water mass-balance model increase much more rapidly than those of the annual lake-volume model. As an example, for simulation year 5, the 99-percent exceedance for the lake level is 1,417.6 feet above sea level for the annual lake-volume model and 1,423.2 feet above sea level for the water mass-balance model. The rapid increase is caused largely by the record precipitation and inflow in the summer and fall of 1993. Because the water mass-balance model produces lake-level traces that closely match the hydrology of Devils Lake, the water mass-balance model is superior to the annual lake-volume model for computing exceedance levels for the 50-year planning horizon.
Determining blood and plasma volumes using bioelectrical response spectroscopy
NASA Technical Reports Server (NTRS)
Siconolfi, S. F.; Nusynowitz, M. L.; Suire, S. S.; Moore, A. D. Jr; Leig, J.
1996-01-01
We hypothesized that an electric field (inductance) produced by charged blood components passing through the many branches of arteries and veins could assess total blood volume (TBV) or plasma volume (PV). Individual (N = 29) electrical circuits (inductors, two resistors, and a capacitor) were determined from bioelectrical response spectroscopy (BERS) using a Hewlett Packard 4284A Precision LCR Meter. Inductance, capacitance, and resistance from the circuits of 19 subjects modeled TBV (sum of PV and computed red cell volume) and PV (based on 125I-albumin). Each model (N = 10, cross validation group) had good validity based on 1) mean differences (-2.3 to 1.5%) between the methods that were not significant and less than the propagated errors (+/- 5.2% for TBV and PV), 2) high correlations (r > 0.92) with low SEE (< 7.7%) between dilution and BERS assessments, and 3) Bland-Altman pairwise comparisons that indicated "clinical equivalency" between the methods. Given the limitation of this study (10 validity subjects), we concluded that BERS models accurately assessed TBV and PV. Further evaluations of the models' validities are needed before they are used in clinical or research settings.
Mohammed, Emad A; Naugler, Christopher
2017-01-01
Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. This tool will allow anyone with historic test volume data to model future demand.
Mohammed, Emad A.; Naugler, Christopher
2017-01-01
Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996
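A minimal additive Holt-Winters forecaster, one of the three models named above, can be sketched as follows; the test-volume series and smoothing constants are hypothetical, and the initialization is deliberately simple:

```python
def holt_winters_additive(y, m, alpha, beta, gamma, horizon):
    # simple additive Holt-Winters with crude initialization:
    # level from the first season, trend from the first two seasons
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]
    for i in range(len(y)):
        s = season[i % m]
        last_level = level
        level = alpha * (y[i] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[i % m] = gamma * (y[i] - level) + (1 - gamma) * s
    # forecast: extrapolate level + trend, re-using the latest seasonal indices
    return [level + (h + 1) * trend + season[(len(y) + h) % m] for h in range(horizon)]

# hypothetical quarterly test volumes with a repeating seasonal pattern (m = 4)
volumes = [100, 120, 140, 110, 105, 126, 147, 115, 110, 132, 154, 121]
forecast = holt_winters_additive(volumes, m=4, alpha=0.3, beta=0.1, gamma=0.2, horizon=4)
assert len(forecast) == 4
assert all(f > 0 for f in forecast)
```

The multiplicative variant replaces the additive seasonal terms with ratios, which suits series whose seasonal swing grows with the level.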
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-10-01
Volume IV of the ISTUM documentation gives information on the individual technology specifications, but relates closely to Chapter II of Volume I. The emphasis in that chapter is on providing an overview of where each technology fits into the general model logic. Volume IV presents the actual cost structure and specification of every technology modeled in ISTUM. The first chapter presents a general overview of the ISTUM technology data base. It includes an explanation of the data base printouts and of how the separate cost building blocks are combined to derive an aggregate technology cost. The remaining chapters are devoted to documenting the specific technology cost specifications. Technologies included are: conventional technologies (boiler and non-boiler conventional technologies); fossil-energy technologies (atmospheric fluidized bed combustion, low-Btu and medium-Btu coal gasification); cogeneration (steam, machine drive, and electrolytic service sectors); solar and geothermal technologies (solar steam, solar space heat, and geothermal steam technologies); and conservation technologies.
Chiu, Peter K F; Roobol, Monique J; Teoh, Jeremy Y; Lee, Wai-Man; Yip, Siu-Ying; Hou, See-Ming; Bangma, Chris H; Ng, Chi-Fai
2016-10-01
To investigate PSA- and PHI (Prostate Health Index)-based models for the prediction of prostate cancer (PCa), and the feasibility of using DRE-estimated prostate volume (DRE-PV) in the models. This study included 569 Chinese men with PSA 4-10 ng/mL and non-suspicious DRE who underwent transrectal ultrasound (TRUS) 10-core prostate biopsies between April 2008 and July 2015. DRE-PV was estimated using 3 pre-defined classes: 25, 40, or 60 mL. The performance of PSA-based and PHI-based predictive models including age, DRE-PV, and TRUS prostate volume (TRUS-PV) was analyzed using logistic regression and the area under the receiver operating characteristic curve (AUC), both in the whole cohort and in the screening age group of 55-75. PCa and high-grade PCa (HGPCa) were diagnosed in 10.9 % (62/569) and 2.8 % (16/569) of the men, respectively. The performance of DRE-PV-based models was similar to that of TRUS-PV-based models. In the age group 55-75, the AUCs for PCa of PSA alone, PSA with DRE-PV and age, PHI alone, PHI with DRE-PV and age, and PHI with TRUS-PV and age were 0.54, 0.71, 0.76, 0.78, and 0.78, respectively. The corresponding AUCs for HGPCa were higher (0.60, 0.70, 0.85, 0.83, and 0.83). At 10 % and 20 % risk thresholds for PCa, 38.4 % and 55.4 % of biopsies could have been avoided with the PHI-based model, respectively. PHI-based models performed better than PSA-based models and could reduce unnecessary biopsies. A DRE-assessed PV can replace TRUS-assessed PV in multivariate prediction models to facilitate clinical use.
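AUC values like those reported above can be computed from predicted risks and biopsy outcomes with the rank-based (Mann-Whitney) estimator; the scores and labels below are hypothetical:

```python
def auc(scores, labels):
    # rank-based AUC (Mann-Whitney U): probability that a randomly chosen
    # positive case outranks a randomly chosen negative, ties counted as half
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical model risk scores vs. biopsy outcomes (1 = cancer found)
scores = [0.82, 0.74, 0.66, 0.41, 0.38, 0.20]
labels = [1, 1, 0, 1, 0, 0]
val = auc(scores, labels)
assert abs(val - 8 / 9) < 1e-12  # 8 of the 9 positive/negative pairs are ordered correctly
```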
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates, even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm(3) resolution, 6-tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate to individual BEMs. Moreover, by using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
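Empirical sediment-yield models of the kind described above are commonly fitted by least squares in log space, so that the result is a power law in the predictors. A minimal sketch with invented basin data (not the study's measurements) follows:

```python
import numpy as np

# hypothetical predictors per basin: peak 1-h rainfall (mm), area burned by the
# most recent fire (km2), years since that fire, basin area (km2) - invented values
X = np.array([
    [12.0,  5.0, 1.0,  8.0],
    [25.0, 12.0, 0.5, 15.0],
    [18.0,  3.0, 2.0,  6.0],
    [30.0, 20.0, 0.2, 22.0],
    [10.0,  1.0, 3.0,  4.0],
    [22.0,  9.0, 1.0, 12.0],
])
y = np.array([1.2e3, 9.5e3, 1.1e3, 2.4e4, 2.0e2, 5.0e3])  # sediment yield, m3

# regress log10(yield) on log10(predictors): a multiplicative power-law model
A = np.column_stack([np.ones(len(X)), np.log10(X)])
coef, *_ = np.linalg.lstsq(A, np.log10(y), rcond=None)
pred = 10 ** (A @ coef)
# with this tiny in-sample example the fit is close in log space
assert float(np.mean(np.abs(np.log10(pred) - np.log10(y)))) < 0.5
```

A real model would be fitted on many basins and judged on held-out storms, as the evaluation dataset of 65 measured yields does above.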
[Compatible biomass models of natural spruce (Picea asperata)].
Wang, Jin Chi; Deng, Hua Feng; Huang, Guo Sheng; Wang, Xue Jun; Zhang, Lu
2017-10-01
By using a nonlinear measurement error method, compatible tree volume and aboveground biomass equations were established based on the volume and biomass data of 150 sample trees of natural spruce (Picea asperata). Two approaches, controlling directly under the total aboveground biomass and controlling jointly from level to level, were used to design the compatible system for the total aboveground biomass and the biomass of four components (stem, bark, branch, and foliage), and the total aboveground biomass could be estimated independently or simultaneously in the system. The results showed that the R² values of the one-variable and bivariate compatible tree volume and aboveground biomass equations were all above 0.85, with a maximum of 0.99. The predictive performance of the volume equations improved significantly when tree height was included as a predictor, while the improvement was not significant for biomass estimation. For the compatible biomass systems, the one-variable model based on controlling jointly from level to level was better than the model based on controlling directly under the total aboveground biomass, but the bivariate models of the two methods were similar. Comparing the fits of the one-variable and bivariate compatible biomass models showed that adding explanatory variables could significantly improve the fit for branch and foliage biomass but had little effect on the other components. Moreover, there was almost no difference between the two estimation methods in this comparison.
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. The tests based on other model performance metrics such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
Yu, Tsung-Hsien; Tung, Yu-Chi; Chung, Kuo-Piao
2015-01-01
Background Volume-infection relationships have been examined for high-risk surgical procedures, but the conclusions remain controversial. The inconsistency might be due to inaccurate identification of cases of infection and different methods of categorizing service volumes. This study takes coronary artery bypass graft (CABG) surgical site infections (SSIs) as an example to examine whether a relationship exists between operation volumes and SSIs when different SSI case identification approaches, definitions, and categorization methods of operation volumes are implemented. Methods A population-based cross-sectional multilevel study was conducted. A total of 7,007 patients who received CABG surgery between 2006 and 2008 from 19 medical centers in Taiwan were recruited. SSIs associated with CABG surgery were identified using International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes and a Classification and Regression Trees (CART) model. Two definitions of surgeon and hospital operation volumes were used: (1) the cumulative CABG operation volume within the study period; and (2) the cumulative CABG operation volume in the year preceding each CABG surgery. Operation volumes were further treated in three different ways: (1) as a continuous variable; (2) as a categorical variable based on quartiles; and (3) as a data-driven categorical variable based on a k-means clustering algorithm. Furthermore, subgroup analyses for comorbidities were also conducted. Results This study showed that hospital volumes were not significantly associated with SSIs, no matter which definitions or categorization methods of operation volume, or which SSI case identification approaches, were used. In contrast, the relationships between surgeon volumes and SSIs varied. Most of the models demonstrated that low-volume surgeons had a higher SSI risk than high-volume surgeons.
Conclusion Surgeon volumes were more important than hospital volumes in exploring the relationship between CABG operation volumes and SSIs in Taiwan. However, the relationships were not robust. Definitions and categorization methods of operation volume and correct identification of SSIs are important issues for future research. PMID:26053035
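The data-driven volume categorization mentioned in the Methods can be sketched with a simple 1-D k-means; the surgeon volumes below are hypothetical:

```python
def kmeans_1d(values, k, iters=50):
    # Lloyd's algorithm for 1-D data; initial centers spread evenly over the range
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # recompute each center as the mean of its group (keep empty centers fixed)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

# hypothetical annual CABG operation volumes per surgeon
volumes = [2, 3, 4, 5, 18, 20, 22, 55, 60, 65]
centers = sorted(kmeans_1d(volumes, k=3))
# the three clusters separate low-, medium-, and high-volume surgeons
assert centers[0] < 10 < centers[1] < 40 < centers[2]
```

Unlike quartile cutpoints, the cluster boundaries adapt to gaps in the volume distribution, which is the appeal of the data-driven categorization.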
Mondoñedo, Jarred R; Suki, Béla
2017-02-01
Lung volume reduction surgery (LVRS) and bronchoscopic lung volume reduction (bLVR) are palliative treatments aimed at reducing hyperinflation in advanced emphysema. Previous work has evaluated functional improvements and survival advantage for these techniques, although their effects on the micromechanical environment in the lung have yet to be determined. Here, we introduce a computational model to simulate a force-based destruction of elastic networks representing emphysema progression, which we use to track the response to lung volume reduction via LVRS and bLVR. We find that (1) LVRS efficacy can be predicted based on pre-surgical network structure; (2) macroscopic functional improvements following bLVR are related to microscopic changes in mechanical force heterogeneity; and (3) both techniques improve aspects of survival and quality of life influenced by lung compliance, albeit while accelerating disease progression. Our model predictions yield unique insights into the microscopic origins underlying emphysema progression before and after lung volume reduction.
Mondoñedo, Jarred R.
2017-01-01
Lung volume reduction surgery (LVRS) and bronchoscopic lung volume reduction (bLVR) are palliative treatments aimed at reducing hyperinflation in advanced emphysema. Previous work has evaluated functional improvements and survival advantage for these techniques, although their effects on the micromechanical environment in the lung have yet to be determined. Here, we introduce a computational model to simulate a force-based destruction of elastic networks representing emphysema progression, which we use to track the response to lung volume reduction via LVRS and bLVR. We find that (1) LVRS efficacy can be predicted based on pre-surgical network structure; (2) macroscopic functional improvements following bLVR are related to microscopic changes in mechanical force heterogeneity; and (3) both techniques improve aspects of survival and quality of life influenced by lung compliance, albeit while accelerating disease progression. Our model predictions yield unique insights into the microscopic origins underlying emphysema progression before and after lung volume reduction. PMID:28182686
Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M
2018-05-17
The gamma ray source plays a very important role in the precision of multiphase flow metering. In this study, different combinations of gamma ray sources ((133Ba-137Cs), (133Ba-60Co), (241Am-137Cs), (241Am-60Co), (133Ba-241Am), and (60Co-137Cs)) were investigated in order to optimize a three-phase flow meter. The three phases were water, oil, and gas, and the flow regime was considered annular. The required data were numerically generated using the MCNP-X code, a Monte Carlo code. The present study forecasts the volume fractions in annular three-phase flow, based on a multi-energy metering system comprising various radiation sources and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the summation of the volume fractions is constant, a constrained modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models, corresponding to the radiation source combinations, were designed. The models were employed to forecast the gas and water volume fractions and were trained on the numerically obtained data. The results show that the best forecasts of the gas and water volume fractions are obtained for the system with (241Am-137Cs) as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.
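The Jaya optimization algorithm used to train the hybrid model can be sketched on a toy objective. This is a generic Jaya implementation (after Rao's formulation) applied to a sphere function standing in for the network's training loss; it is not the paper's actual setup:

```python
import random

def jaya(obj, dim, n_pop, iters, lo, hi, seed=1):
    # Jaya: each candidate moves toward the current best solution and away from
    # the worst; the method has no algorithm-specific tuning parameters
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    for _ in range(iters):
        scores = [obj(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, x in enumerate(pop):
            cand = [
                min(max(xj + rng.random() * (bj - abs(xj))
                        - rng.random() * (wj - abs(xj)), lo), hi)
                for xj, bj, wj in zip(x, best, worst)
            ]
            if obj(cand) < obj(x):  # greedy acceptance
                pop[i] = cand
    return min(pop, key=obj)

# toy stand-in for the hybrid model's training loss: a 3-D sphere function
def sphere(x):
    return sum(v * v for v in x)

best = jaya(sphere, dim=3, n_pop=20, iters=200, lo=-5.0, hi=5.0)
assert sphere(best) < 1.0  # converges near the origin on this toy problem
```

In the paper's setting, obj would be the network's prediction error on the MCNP-X data, and the decision variables would be the network weights.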
Campbell, K B; Shroff, S G; Kirkpatrick, R D
1991-06-01
Based on the premise that short-time-scale, small-amplitude pressure/volume/outflow behavior of the left ventricular chamber was dominated by dynamic processes originating in cardiac myofilaments, a prototype model was built to predict pressure responses to volume perturbations. In the model, chamber pressure was taken to be the product of the number of generators in a pressure-bearing state and their average volumetric distortion, as in the muscle theory of A.F. Huxley, in which force was equal to the number of attached crossbridges and their average lineal distortion. Further, as in the muscle theory, pressure generators were assumed to cycle between two states, the pressure-bearing state and the non-pressure-bearing state. Experiments were performed in the isolated ferret heart, where variable volume decrements (0.01-0.12 ml) were removed at two commanded flow rates (flow clamps, -7 and -14 ml/sec). Pressure responses to volume removals were analyzed. Although the prototype model accounted for most features of the pressure responses, subtle but systematic discrepancies were observed. The presence or absence of flow and the magnitude of flow affected estimates of model parameters. However, estimates of parameters did not differ when the model was fitted to flow clamps with similar magnitudes of flows but different volume changes. Thus, prototype model inadequacies were attributed to misrepresentations of flow-related effects but not of volume-related effects. Based on these discrepancies, an improved model was built that added to the simple two-state cycling scheme, a pathway to a third state. This path was followed only in response to volume change. The improved model eliminated the deficiencies of the prototype model and was adequate in accounting for all observations. 
Since the template for the improved model was taken from the cycling crossbridge theory of muscle contraction, it was concluded that, in spite of the complexities of geometry, architecture, and regional heterogeneity of function and structure, crossbridge mechanisms dominated the short-time-scale dynamics of left ventricular chamber behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cella, Laura, E-mail: laura.cella@cnr.it; Department of Advanced Biomedical Sciences, Federico II University School of Medicine, Naples; Liuzzi, Raffaele
Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole-heart, cardiac-chamber, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole-heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as the heart maximum dose or cardiac-chamber V30 increases. They also increase with larger volumes of the heart or cardiac chambers and decrease when the lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes statistical evidence of an indirect effect of lung size on radiation-induced heart toxicity.
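The shape of a 3-variable logistic NTCP model of the kind described above can be sketched as follows. The coefficients are illustrative placeholders, not the fitted values from the paper; their signs merely follow the reported trends (NTCP rises with maximum dose and heart volume, and falls with lung volume).

```python
import math

# Hedged sketch of a multivariate logistic NTCP model. B0..B_LUNG are
# assumed placeholder coefficients, not the study's fitted parameters.
B0, B_DOSE, B_HEART, B_LUNG = -4.0, 0.08, 0.002, -0.0005

def ntcp(max_dose_gy, heart_vol_cc, lung_vol_cc):
    """Logistic NTCP as a function of heart maximum dose and organ volumes."""
    s = B0 + B_DOSE * max_dose_gy + B_HEART * heart_vol_cc + B_LUNG * lung_vol_cc
    return 1.0 / (1.0 + math.exp(-s))
```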
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamivand, Mahmood; Yang, Ying; Busby, Jeremy T.
2017-03-11
The current work combines the Cluster Dynamics (CD) technique and CALPHAD-based precipitation modeling to address second-phase precipitation in cold-worked (CW) 316 stainless steels (SS) under irradiation at 300–400 °C. CD provides the radiation-enhanced diffusion and dislocation evolution as inputs for the precipitation model. The CALPHAD-based precipitation model treats the nucleation, growth and coarsening of precipitation processes based on classical nucleation theory and evolution equations, and simulates the composition, size and size distribution of precipitate phases. We benchmark the model against available experimental data at fast reactor conditions (9.4 × 10⁻⁷ dpa/s and 390 °C) and then use the model to predict the phase instability of CW 316 SS under light water reactor (LWR) extended-life conditions (7 × 10⁻⁸ dpa/s and 275 °C). The model accurately predicts the γ' (Ni₃Si) precipitation evolution under fast reactor conditions and that the formation of this phase is dominated by radiation-enhanced segregation. The model also predicts a carbide volume fraction that agrees well with available experimental data from a PWR reactor but is much higher than the volume fraction observed in fast reactors. We propose that radiation-enhanced dissolution and/or carbon depletion at sinks, which occurs at high flux, could be the main source of this inconsistency. The integrated model predicts ~1.2% volume fraction for carbide and ~3.0% volume fraction for γ' for typical CW 316 SS (with 0.054 wt% carbon) under LWR extended-life conditions. Finally, this work provides valuable insights into the magnitudes and mechanisms of precipitation in irradiated CW 316 SS for nuclear applications.
Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume
NASA Astrophysics Data System (ADS)
Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.
2000-06-01
The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology, and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and then matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (S.D. ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 to 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.
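The final volume calculation described above (summing included voxels) reduces to a count scaled by the voxel volume. The sketch below assumes a hypothetical attenuation window and voxel spacing; the actual knowledge-based system uses model-derived features to select regions.

```python
# Minimal sketch of threshold-based voxel counting for volume measurement.
# HU window and voxel dimensions are illustrative assumptions.
HU_LOW, HU_HIGH = 80, 300            # assumed enhancement window
VOXEL_CC = 0.07 * 0.07 * 0.3         # assumed voxel size (cm^3)

def kidney_volume(volume_image):
    """Count voxels in the threshold window of a nested-list 3-D image, in cc."""
    included = sum(
        1
        for slice_ in volume_image
        for row in slice_
        for hu in row
        if HU_LOW <= hu <= HU_HIGH
    )
    return included * VOXEL_CC
```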
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
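The jackknife/bootstrap combination described above can be sketched on a toy estimator. Here the estimator is a plain mean and the data are illustrative; the paper uses local spatial nonparametric prediction models, so this only shows the resampling mechanics.

```python
import random
import statistics

def jackknife_replicates(data):
    """Leave-one-out replicates of the mean estimator."""
    return [statistics.mean(data[:i] + data[i + 1:]) for i in range(len(data))]

def bootstrap_ci(replicates, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence bounds from resampled jackknife replicates."""
    rng = random.Random(seed)
    boots = sorted(
        statistics.mean(rng.choices(replicates, k=len(replicates)))
        for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

volumes = [3.1, 2.7, 4.0, 3.6, 2.9, 3.3, 3.8, 3.0]  # toy cell volumes
reps = jackknife_replicates(volumes)
low, high = bootstrap_ci(reps)
```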
Modeling of turbulent transport as a volume process
NASA Technical Reports Server (NTRS)
Jennings, Mark J.; Morel, Thomas
1987-01-01
An alternative type of modeling was proposed for the turbulent transport terms in Reynolds-averaged equations. One particular implementation of the model was considered, based on the two-point velocity correlations. The model was found to reproduce the trends but not the magnitude of the nonisotropic behavior of the turbulent transport. Some interesting insights were developed concerning the shape of the contracted two-point correlation volume. This volume is strongly deformed by mean shear from the spherical shape found in unstrained flows. Of particular interest is the finding that the shape is sharply waisted, indicating preferential lines of communication, which should have a direct effect on turbulent transfer and on other processes.
Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses
NASA Astrophysics Data System (ADS)
Rutkowska, Eva; Baker, Colin; Nahum, Alan
2010-04-01
A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical of serial organs than with those typical of parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical of serial and parallel organs, and it may help investigators interpret the results from clinical studies.
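The core simulation logic above (LQ cell kill per FSU, complication when the surviving FSU fraction in the CFV drops below a threshold) can be sketched briefly. All parameter values here are assumptions for illustration, not the paper's settings.

```python
import math

# Assumed illustrative parameters (not from the study).
ALPHA, BETA = 0.15, 0.05          # LQ parameters, Gy^-1 and Gy^-2
CELLS_PER_FSU = 100
FSU_DENSITY_THRESHOLD = 0.5       # complication threshold within the CFV

def lq_survival(dose_gy):
    """LQ surviving fraction exp(-(alpha*D + beta*D^2)) for a single dose D."""
    return math.exp(-(ALPHA * dose_gy + BETA * dose_gy ** 2))

def fsu_survives(dose_gy):
    """Treat an FSU as intact if the expected surviving stem cells >= 1."""
    return CELLS_PER_FSU * lq_survival(dose_gy) >= 1.0

def complication(cfv_doses):
    """Trigger a complication if too few FSUs in the CFV remain intact."""
    intact = sum(fsu_survives(d) for d in cfv_doses) / len(cfv_doses)
    return intact < FSU_DENSITY_THRESHOLD
```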
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. 
These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
A consensus-based dynamics for market volumes
NASA Astrophysics Data System (ADS)
Sabatelli, Lorenzo; Richmond, Peter
2004-12-01
We develop a model of trading orders based on opinion dynamics. The agents may be thought of as the shareholders of a major mutual fund rather than as direct traders. The balance between their buy and sell orders determines the size of the fund order (volume) and has an impact on prices and indexes. We assume agents interact simultaneously with each other through a Sznajd-like interaction. Their degree of connection is determined by the probability of changing opinion independently of what their neighbours are doing. We assume that such a probability may change randomly, after each transaction, by an amount proportional to the relative difference between the volatility then measured and a benchmark, which we take to be an exponential moving average of the past volume values. We show that this simple model is compatible with some of the main statistical features observed for asset volumes in financial markets.
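The benchmark and update rule described above can be sketched directly: an exponential moving average (EMA) of past volumes, and a flip-probability adjustment proportional to the relative volatility gap. The smoothing factor and update gain below are assumptions, not values from the paper.

```python
# Sketch of the EMA benchmark and the probability update rule; alpha and
# gain are illustrative assumptions.

def ema(values, alpha=0.2):
    """Exponential moving average of a series of past volume values."""
    out = values[0]
    for v in values[1:]:
        out = alpha * v + (1 - alpha) * out
    return out

def update_flip_probability(p, volatility, benchmark, gain=0.1):
    """Move p by an amount proportional to the relative volatility gap."""
    p = p + gain * (volatility - benchmark) / benchmark
    return min(max(p, 0.0), 1.0)  # keep it a valid probability
```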
Numerical Cerebrospinal System Modeling in Fluid-Structure Interaction.
Garnotel, Simon; Salmon, Stéphanie; Balédent, Olivier
2018-01-01
Cerebrospinal fluid (CSF) stroke volume in the aqueduct is widely used to evaluate CSF dynamics disorders. In a healthy population, aqueduct stroke volume represents around 10% of the spinal stroke volume while intracranial subarachnoid space stroke volume represents 90%. The amplitude of the CSF oscillations through the different compartments of the cerebrospinal system is a function of the geometry and the compliances of each compartment, but we suspect that it could also be impacted by the cardiac cycle frequency. To study this CSF distribution, we have developed a numerical model of the cerebrospinal system taking into account cerebral ventricles, intracranial subarachnoid spaces, spinal canal and brain tissue in fluid-structure interactions. A numerical fluid-structure interaction model is implemented using a finite-element method library to model the cerebrospinal system and its interaction with the brain based on fluid mechanics equations and linear elasticity equations coupled in a monolithic formulation. The model geometry, simplified in a first approach, is designed in accordance with realistic volume ratios of the different compartments: a thin tube is used to mimic the high flow resistance of the aqueduct. CSF velocity and pressure and brain displacements are obtained as simulation results, and CSF flow and stroke volume are calculated from these results. Simulation results show a significant variability of aqueduct stroke volume and intracranial subarachnoid space stroke volume in the physiological range of cardiac frequencies. Fluid-structure interactions are numerous in the cerebrospinal system and difficult to understand in the rigid skull. The presented model highlights significant variations of stroke volumes under cardiac frequency variations only.
Fananapazir, Ghaneh; Benzl, Robert; Corwin, Michael T; Chen, Ling-Xin; Sageshima, Junichiro; Stewart, Susan L; Troppmann, Christoph
2018-07-01
Purpose To determine whether the predonation computed tomography (CT)-based volume of the future remnant kidney is predictive of postdonation renal function in living kidney donors. Materials and Methods This institutional review board-approved, retrospective, HIPAA-compliant study included 126 live kidney donors who had undergone predonation renal CT between January 2007 and December 2014 as well as 2-year postdonation measurement of estimated glomerular filtration rate (eGFR). The whole kidney volume and cortical volume of the future remnant kidney were measured and standardized for body surface area (BSA). Bivariate linear associations between the ratios of whole kidney volume to BSA and cortical volume to BSA were obtained. A linear regression model for 2-year postdonation eGFR that incorporated donor age, sex, and either whole kidney volume-to-BSA ratio or cortical volume-to-BSA ratio was created, and the coefficient of determination (R²) for the model was calculated. Factors not statistically additive in assessing 2-year eGFR were removed by using backward elimination, and the coefficient of determination for this parsimonious model was calculated. Results Correlation was slightly better for cortical volume-to-BSA ratio than for whole kidney volume-to-BSA ratio (r = 0.48 vs r = 0.44, respectively). The linear regression model incorporating all donor factors had an R² of 0.66. The only factors that were significantly additive to the equation were cortical volume-to-BSA ratio and predonation eGFR (P = .01 and P < .01, respectively), and the final parsimonious linear regression model incorporating these two variables explained almost the same amount of variance (R² = 0.65) as did the full model. Conclusion The cortical volume of the future remnant kidney helped predict postdonation eGFR at 2 years. The cortical volume-to-BSA ratio should thus be considered for addition as an important variable to living kidney donor evaluation and selection guidelines.
© RSNA, 2018.
A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change
Ashraf, M. Irfan; Meng, Fan-Rui; Bourque, Charles P.-A.; MacLean, David A.
2015-01-01
Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero mean the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm² 5-year⁻¹ and volume: 0.0008 m³ 5-year⁻¹). Model variability described by root mean squared error (RMSE) was 40.53 cm² 5-year⁻¹ in basal area prediction and 0.0393 m³ 5-year⁻¹ in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change.
Artificial intelligence technology has substantial potential in forest modelling. PMID:26173081
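The three goodness-of-fit measures quoted above can be sketched directly: model efficiency (1 is an ideal fit, values below zero are worse than predicting the mean), mean prediction error (BIAS), and RMSE. The data points below are illustrative, not the study's validation set.

```python
import math

def model_efficiency(obs, pred):
    """1 - SS_res / SS_tot: relative index of model performance."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def bias(obs, pred):
    """Mean prediction error (signed)."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean squared error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

observed  = [10.0, 12.0, 9.0, 11.0]   # toy values
predicted = [10.5, 11.5, 9.2, 11.1]
```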
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing
2012-07-01
Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes that occur in shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using retrospective, in vivo, very-short-interval diffusion magnetic resonance imaging scans where true zero tumor volume change is unequivocally known and synthetic data, respectively. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93% to 10.49%) for MI-based and (-7.69%, 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with increase in tumor volume and decreased with the increase in the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02% to 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.
Daugirdas, John T; Greene, Tom; Depner, Thomas A; Chumlea, Cameron; Rocco, Michael J; Chertow, Glenn M
2003-09-01
The modeled volume of urea distribution (Vm) in intermittently hemodialyzed patients is often compared with total body water (TBW) volume predicted from population studies of patient anthropometrics (Vant). Using data from the HEMO Study, we compared Vm determined by both blood-side and dialysate-side urea kinetic models with Vant as calculated by the Watson, Hume-Weyers, and Chertow anthropometric equations. Median levels of dialysate-based Vm and blood-based Vm agreed (43% and 44% of body weight, respectively). These volumes were lower than anthropometric estimates of TBW, which had median values of 52% to 55% of body weight for the three formulas evaluated. The difference between the Watson equation for TBW and modeled urea volume was greater in Caucasians (19%) than in African Americans (13%). Correlations between Vm and Vant determined by each of the three anthropometric estimation equations were similar; but Vant derived from the Watson formula had a slightly higher correlation with Vm. The difference between Vm and the anthropometric formulas was greatest with the Chertow equation, less with the Hume-Weyers formula, and least with the Watson estimate. The age term in the Watson equation for men that adjusts Vant downward with increasing age reduced an age effect on the difference between Vant and Vm in men. The findings show that kinetically derived values for V from blood-side and dialysate-side modeling are similar, and that these modeled urea volumes are lower by a substantial amount than anthropometric estimates of TBW. The higher values for anthropometry-derived TBW in hemodialyzed patients could be due to measurement errors. However, the possibility exists that TBW space is contracted in patients with end-stage renal disease (ESRD) or that the TBW space and the urea distribution space are not identical.
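The anthropometric TBW estimate (Vant) referenced above can be sketched with the commonly cited form of the Watson equations; the coefficients below are reproduced from memory of the published formulas and should be verified against the original source before use. Note the negative age term for men, which adjusts Vant downward with increasing age as discussed in the abstract.

```python
# Hedged sketch of the Watson anthropometric TBW equations (coefficients
# assumed from the commonly cited published form; verify before use).

def watson_tbw(sex, age_yr, height_cm, weight_kg):
    """Estimated total body water (liters) from anthropometrics."""
    if sex == "male":
        return 2.447 - 0.09516 * age_yr + 0.1074 * height_cm + 0.3362 * weight_kg
    return -2.097 + 0.1069 * height_cm + 0.2466 * weight_kg
```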
NASA Astrophysics Data System (ADS)
Bozhalkina, Yana
2017-12-01
A mathematical model of loan portfolio structure change, in the form of a Markov chain, is explored. The model considers in one scheme the process of customer attraction, customer selection based on credit score, and loan repayment. It describes the dynamics of the structure and volume of the loan portfolio, which allows medium-term forecasts of profitability and risk. Within the model, corrective actions by bank management aimed at increasing lending volumes or reducing risk are formalized.
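The Markov-chain view above can be sketched as a probability vector over portfolio states advanced by a transition matrix each period. The states and transition probabilities here are illustrative assumptions, not the paper's model.

```python
# Toy Markov chain for loan portfolio structure; all numbers are assumed.
STATES = ["applicant", "current", "repaid", "default"]
P = [  # row = from-state, column = to-state; each row sums to 1
    [0.30, 0.60, 0.00, 0.10],   # applicant: re-apply, accepted, or rejected
    [0.00, 0.80, 0.15, 0.05],   # current loan: continue, repay, or default
    [0.00, 0.00, 1.00, 0.00],   # repaid (absorbing)
    [0.00, 0.00, 0.00, 1.00],   # default (absorbing)
]

def step(dist):
    """Propagate the portfolio-structure distribution one period forward."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0]     # start: all mass in the applicant pool
for _ in range(5):               # medium-term forecast over 5 periods
    dist = step(dist)
```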
Incorporation of Condensation Heat Transfer in a Flow Network Code
NASA Technical Reports Server (NTRS)
Anthony, Miranda; Majumdar, Alok
2002-01-01
Pure water is distilled from waste water in the International Space Station. The distillation assembly consists of an evaporator, a compressor and a condenser. Vapor is periodically purged from the condenser to avoid vapor accumulation. Purged vapor is condensed in a tube by coolant water prior to entering the purge pump. The paper presents a condensation model of purged vapor in a tube. This model is based on the Finite Volume Method. In the Finite Volume Method, the flow domain is discretized into multiple control volumes and a simultaneous analysis is performed.
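The Finite Volume idea mentioned above (discretize the domain into control volumes and solve the resulting equations simultaneously) can be illustrated on a toy problem: steady 1-D heat conduction with fixed end temperatures, solved as one tridiagonal linear system. This is a sketch of the discretization style only, not the flow-network condensation code.

```python
# Cell-centred finite volume sketch for d2T/dx2 = 0 on a 1-D rod.

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: a = sub-, b = main-, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = c[:], d[:]
    cp[0] /= b[0]
    dp[0] /= b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = dp[:]
    for i in range(n - 2, -1, -1):
        x[i] -= cp[i] * x[i + 1]
    return x

def steady_conduction(n_cells, t_left, t_right):
    """Assemble and solve the control-volume balances simultaneously."""
    a = [0.0] + [1.0] * (n_cells - 1)
    c = [1.0] * (n_cells - 1) + [0.0]
    b = [-3.0] + [-2.0] * (n_cells - 2) + [-3.0]  # boundary cells see a half-cell
    d = [-2.0 * t_left] + [0.0] * (n_cells - 2) + [-2.0 * t_right]
    return solve_tridiagonal(a, b, c, d)
```

The exact solution is linear in x, so the cell-centred values should fall on the straight line between the two boundary temperatures.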
The quantitative modelling of human spatial habitability
NASA Technical Reports Server (NTRS)
Wise, James A.
1988-01-01
A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the University of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.
Cost drivers and resource allocation in military health care systems.
Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R
2007-03-01
This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
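The three-variable logarithmic-linear cost model above has the form ln(cost) = b0 + b1·ln(volume) + b2·ln(complexity) + b3·ln(efficiency). The coefficients below are hypothetical placeholders (including the sign of the efficiency term), not the fitted values from the study.

```python
import math

# Assumed illustrative coefficients for a log-linear cost model.
B0, B_VOL, B_CMX, B_EFF = 2.0, 0.9, 0.5, -0.4

def predicted_cost(volume, complexity, efficiency):
    """Exponentiate the linear-in-logs prediction back to a cost."""
    ln_cost = (B0 + B_VOL * math.log(volume)
                  + B_CMX * math.log(complexity)
                  + B_EFF * math.log(efficiency))
    return math.exp(ln_cost)
```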
Magdoom, Kulam Najmudeen; Pishko, Gregory L.; Rice, Lori; Pampo, Chris; Siemann, Dietmar W.; Sarntinoranont, Malisa
2014-01-01
Systemic drug delivery to solid tumors involving macromolecular therapeutic agents is challenging for many reasons. Amongst them is their chaotic microvasculature which often leads to inadequate and uneven uptake of the drug. Localized drug delivery can circumvent such obstacles and convection-enhanced delivery (CED) - controlled infusion of the drug directly into the tissue - has emerged as a promising delivery method for distributing macromolecules over larger tissue volumes. In this study, a three-dimensional MR image-based computational porous media transport model accounting for realistic anatomical geometry and tumor leakiness was developed for predicting the interstitial flow field and distribution of albumin tracer following CED into the hind-limb tumor (KHT sarcoma) in a mouse. Sensitivity of the model to changes in infusion flow rate, catheter placement and tissue hydraulic conductivity were investigated. The model predictions suggest that 1) tracer distribution is asymmetric due to heterogeneous porosity; 2) tracer distribution volume varies linearly with infusion volume within the whole leg, and exponentially within the tumor reaching a maximum steady-state value; 3) infusion at the center of the tumor with high flow rates leads to maximum tracer coverage in the tumor with minimal leakage outside; and 4) increasing the tissue hydraulic conductivity lowers the tumor interstitial fluid pressure and decreases the tracer distribution volume within the whole leg and tumor. The model thus predicts that the interstitial fluid flow and drug transport is sensitive to porosity and changes in extracellular space. This image-based model thus serves as a potential tool for exploring the effects of transport heterogeneity in tumors. PMID:24619021
A 4DCT imaging-based breathing lung model with relative hysteresis
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-01-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811
A 4DCT imaging-based breathing lung model with relative hysteresis
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-12-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z; Kennedy, A; Larsen, E
2015-06-15
Purpose: The study aims to develop and validate a knowledge based planning (KBP) model for external beam radiation therapy of locally advanced non-small cell lung cancer (LA-NSCLC). Methods: RapidPlan™ technology was used to develop a lung KBP model. Plans from 65 patients with LA-NSCLC were used to train the model. 25 patients were treated with VMAT, and the other patients were treated with IMRT. Organs-at-risk (OARs) included right lung, left lung, heart, esophagus, and spinal cord. DVH and geometric distribution DVH were extracted from the treated plans. The model was trained using principal component analysis and step-wise multiple regression. Box plot and regression plot tools were used to identify geometric outliers and dosimetric outliers and help fine-tune the model. The validation was performed by (a) comparing predicted DVH boundaries to actual DVHs of 63 patients and (b) using an independent set of treatment planning data. Results: 63 out of 65 plans were included in the final KBP model, with PTV volume ranging from 102.5cc to 1450.2cc. Total treatment dose prescription varied from 50Gy to 70Gy based on institutional guidelines. One patient was excluded due to a geometric outlier in which 2.18cc of spinal cord was included in the PTV. The other patient was excluded due to a dosimetric outlier in which dose sparing of the spinal cord was heavily enforced in the clinical plan. Target volume, OAR volume, OAR overlap volume percentage to target, and OAR out-of-field volume were included in the trained model. Lungs and heart had two principal component scores of GEDVH, whereas spinal cord and esophagus had three in the final model. The predicted DVH band (mean ±1 standard deviation) represented 66.2±3.6% of all DVHs. Conclusion: A KBP model was developed and validated for radiotherapy of LA-NSCLC in a commercial treatment planning system. Its clinical implementation may improve the consistency of IMRT/VMAT planning.
Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif
2014-11-01
Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, a logistic model based on standard dosimetric parameters (LM), and a logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m=0.17, and TD50=72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated with the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor.
Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
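The pipeline described in this abstract (kernel density estimation of each differential DVH, then functional PCA to extract variation modes) can be sketched with plain numpy. The per-patient dose samples below are synthetic stand-ins for the 141 rectum histograms; the bandwidth, grid, and patient count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pat = 20                                  # stand-in for the study's 141 patients
grid = np.linspace(0, 80, 161)              # dose grid in Gy

def kde(samples, grid, bw=3.0):
    """Gaussian kernel density estimate of a dose distribution on a grid."""
    d = (grid[None, :] - samples[:, None]) / bw
    return np.exp(-0.5 * d**2).sum(0) / (len(samples) * bw * np.sqrt(2 * np.pi))

# Each "patient": dose samples from a patient-specific distribution (synthetic).
densities = np.stack([
    kde(rng.normal(rng.uniform(40, 65), rng.uniform(5, 12), 200), grid)
    for _ in range(n_pat)
])

# Functional PCA on the estimated densities: center, then eigendecompose the
# sample covariance. The leading components are the "variation modes" of the
# dose distribution, usable as covariates in a (functional) logistic regression.
centered = densities - densities.mean(0)
cov = centered.T @ centered / (n_pat - 1)
eigval, eigvec = np.linalg.eigh(cov)        # ascending order
order = np.argsort(eigval)[::-1]
scores = centered @ eigvec[:, order[:3]]    # first 3 functional PC scores
explained = eigval[order[:3]] / eigval.sum()
print(scores.shape, explained.round(2))
```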
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In the mixed old-growth broadleaf stands of the Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within a plot; in other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected with a systematic random design in the Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) measures the height of the first surface on the ground, including terrain features, trees, buildings, etc., and thus provides a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes varying from 1 to 10 m in 1 m steps. The DSMs were checked manually for probable errors. For the pixels corresponding to each ground sample, the standard deviation and range of DSM heights were calculated. For modeling, a non-linear regression method was used. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor. The relative bias and RMSE of the estimates were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaf forests, these results are encouraging. One major problem with this method occurs when the tree canopy cover is completely closed: the standard deviation of height is then low while the stand volume is high. In future studies, applying forest stratification could be investigated.
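The regression step above can be sketched as a power-law model, stand volume V = a * s^b, fitted in log space to the plot-level standard deviation s of DSM heights. The data, the power-law form, and all parameter values below are illustrative assumptions; the paper reports only that a non-linear regression was used.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in for the 120 field plots: plot-level standard deviation
# of DSM heights (m) and "observed" stand volume (m^3/ha), both synthetic.
sd_height = rng.uniform(2, 12, 120)
volume = 40.0 * sd_height**1.2 * rng.lognormal(0, 0.15, 120)

# Power-law model V = a * s^b, fitted as a linear model in log space.
b, log_a = np.polyfit(np.log(sd_height), np.log(volume), 1)
pred = np.exp(log_a) * sd_height**b

# Accuracy measures in the same form the paper reports (relative bias, RMSE).
rel_rmse = np.sqrt(np.mean((pred - volume) ** 2)) / volume.mean() * 100
rel_bias = np.mean(pred - volume) / volume.mean() * 100
print(f"b = {b:.2f}, relative RMSE = {rel_rmse:.1f}%, relative bias = {rel_bias:.1f}%")
```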
Organization-based Model-driven Development of High-assurance Multiagent Systems
2009-02-27
…based Model-driven Development of High-assurance Multiagent Systems" performed by Dr. Scott A. DeLoach and Dr. Robby at Kansas State University… A Capabilities Based Model for Artificial Organizations. Journal of Autonomous Agents and Multiagent Systems, Volume 16, no. 1, February 2008, pp. … Matson, E. T. (2007). A capabilities based theory of artificial organizations. Journal of Autonomous Agents and Multiagent Systems.
Heat transfer measurements for Stirling machine cylinders
NASA Technical Reports Server (NTRS)
Kornhauser, Alan A.; Kafka, B. C.; Finkbeiner, D. L.; Cantelmi, F. C.
1994-01-01
The primary purpose of this study was to measure the effects of inflow-produced turbulence on heat transfer in Stirling machine cylinders. A secondary purpose was to provide new experimental information on heat transfer in gas springs without inflow. The apparatus for the experiment consisted of a varying-volume piston-cylinder space connected to a fixed-volume space by an orifice. The orifice size could be varied to adjust the level of inflow-produced turbulence, or the orifice plate could be removed completely so as to merge the two spaces into a single gas spring space. Speed, cycle mean pressure, overall volume ratio, and varying-volume space clearance ratio could also be adjusted. Volume, pressure in both spaces, and local heat flux at two locations were measured. The pressure and volume measurements were used to calculate area-averaged heat flux, heat transfer hysteresis loss, and other heat transfer-related effects. Experiments in the one-space arrangement extended the range of previous gas spring tests to lower volume ratio and higher nondimensional speed. The tests corroborated previous results and showed that analytic models for heat transfer and loss based on volume ratio approaching 1 were valid for volume ratios ranging from 1 to 2, a range covering most gas springs in Stirling machines. Data from experiments in the two-space arrangement were first analyzed by lumping the two spaces together and examining total loss and averaged heat transfer as functions of overall nondimensional parameters. Heat transfer and loss were found to be significantly increased by inflow-produced turbulence. These increases could be modeled by appropriate adjustment of empirical coefficients in an existing semi-analytic model. An attempt was made to use an inverse, parameter-optimization procedure to find the heat transfer in each of the two spaces.
This procedure was successful in retrieving this information from simulated pressure-volume data with artificially generated noise, but it failed with the actual experimental data. This is evidence that the models used in the parameter optimization procedure (and to generate the simulated data) were not correct. Data from the surface heat flux sensors indicated that the primary shortcoming of these models was that they assumed turbulence levels to be constant over the cycle. Sensor data in the varying volume space showed a large increase in heat flux, probably due to turbulence, during the expansion stroke.
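The hysteresis loss referred to above is the net work absorbed per cycle, i.e. the area enclosed by the pressure-volume loop, computed here as the path integral -∮p dV. The sinusoidal loop below is a generic illustration of how such a loss is evaluated from sampled p-V data, not a model of this apparatus; the amplitudes and phase shift are assumed values.

```python
import numpy as np

# Hysteresis loss per cycle is the enclosed p-V loop area: W_loss = -(closed
# integral of p dV). A small phase shift between pressure and volume, as
# heat-transfer hysteresis produces, opens the loop. Values are illustrative.
theta = np.linspace(0, 2 * np.pi, 2001)
V = 1.0 + 0.25 * np.sin(theta)              # volume, arbitrary units
p = 1.0 - 0.35 * np.sin(theta + 0.05)       # pressure with a small phase shift

# Trapezoidal path integral of p dV around the closed cycle.
W_loss = -np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(V))
print(f"loss per cycle: {W_loss:.4f} (arb. units)")
```

For this loop the analytic loop area is 0.0875*pi*sin(0.05), which the trapezoidal sum reproduces to high accuracy because the integrand is smooth and periodic.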
NASA Astrophysics Data System (ADS)
Kong, Lingxin; Yang, Bin; Xu, Baoqiang; Li, Yifu
2014-09-01
Based on the molecular interaction volume model (MIVM), the activities of the components of Sn-Sb, Sb-Bi, Sn-Zn, Sn-Cu, and Sn-Ag alloys were predicted. The predicted values are in good agreement with the experimental data, which indicates that the MIVM has good stability and reliability owing to its sound physical basis. A significant advantage of the MIVM lies in its ability to predict the thermodynamic properties of liquid alloys using only two parameters. The phase equilibria of Sn-Sb and Sn-Bi alloys were calculated from the properties of the pure components and the activity coefficients; the results indicate that Sn-Sb and Sn-Bi alloys can be separated thoroughly by vacuum distillation. This study extends previous investigations and provides an effective and convenient model on which to base refining simulations for Sn-based alloys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Antong; Deeley, Matthew A.; Niermann, Kenneth J.
2010-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) is the state-of-the-art technique for head and neck cancer treatment. It requires precise delineation of the target to be treated and structures to be spared, which is currently done manually. The process is a time-consuming task of which the delineation of lymph node regions is often the longest step. Atlas-based delineation has been proposed as an alternative, but, in the authors' experience, this approach is not accurate enough for routine clinical use. Here, the authors improve atlas-based segmentation results obtained for level II-IV lymph node regions using an active shape model (ASM) approach. Methods: An average image volume was first created from a set of head and neck patient images with minimally enlarged nodes. The average image volume was then registered using affine, global, and local nonrigid transformations to the other volumes to establish a correspondence between surface points in the atlas and surface points in each of the other volumes. Once the correspondence was established, the ASMs were created for each node level. The models were then used first to constrain the results obtained with an atlas-based approach and then to iteratively refine the solution. Results: The method was evaluated through a leave-one-out experiment. The ASM- and atlas-based segmentations were compared to manual delineations via the Dice similarity coefficient (DSC) for volume overlap and the Euclidean distance between manual and automatic 3D surfaces. The mean DSC value obtained with the ASM-based approach is 10.7% higher than with the atlas-based approach; the mean and median surface errors were decreased by 13.6% and 12.0%, respectively. Conclusions: The ASM approach is effective in reducing segmentation errors in areas of low CT contrast where purely atlas-based methods are challenged. Statistical analysis shows that the improvements brought by this approach are significant.
Chvetsov, Alexei V; Dong, Lei; Palta, Jantinder R; Amdur, Robert J
2009-10-01
To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy that is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of total cell number and (2) tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on the averaged values of radiobiologic parameters was able to describe eight cases during the entire treatment and four cases partially (50% of treatment time) with a maximum 20% error. The largest discrepancies between the model and clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, like concurrent chemotherapy, on tumor volume should be evaluated.
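The abstract's two approximations (volume proportional to total cell number, and a split into viable and lethally damaged subpopulations with exponential removal of damaged cells) can be sketched as a minimal two-compartment simulation. Hypoxic compartments are omitted here for brevity, and all parameter values (linear-quadratic coefficients, fraction size, clearance half-life) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Minimal two-compartment sketch of the tumor-volume model:
# viable cells are killed each fraction; lethally damaged cells are
# removed by exponential decay; volume is proportional to cell number.
alpha, beta = 0.3, 0.03          # LQ parameters, Gy^-1 and Gy^-2 (assumed)
d = 2.0                          # dose per fraction, Gy
half_life = 3.0                  # days, clearance of lethally damaged cells
decay = np.log(2) / half_life

S = np.exp(-alpha * d - beta * d**2)   # surviving fraction per fraction

viable, damaged = 1.0, 0.0             # cell numbers, normalized to initial total
volume = []
for day in range(40):                  # one 2-Gy fraction per day
    killed = viable * (1 - S)
    viable *= S
    damaged = damaged * np.exp(-decay) + killed
    volume.append(viable + damaged)    # approximation 1: volume proportional to cells

print(f"relative volume after 40 fractions: {volume[-1]:.4f}")
```

The simulated volume shrinks monotonically, with the late-time decay rate set by the slower of cell killing and damaged-cell clearance, which is the qualitative behavior such models are fitted against.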
Colonic transit time and pressure based on Bernoulli's principle.
Uno, Yoshiharu
2018-01-01
Variations in the caliber of the human large intestinal tract cause changes in the pressure and velocity of its contents, depending on flow volume, gravity, and density, which are all variables in Bernoulli's principle. It was therefore hypothesized that constipation and diarrhea can occur due to changes in the colonic transit time (CTT), according to Bernoulli's principle. In addition, it was hypothesized that high amplitude peristaltic contractions (HAPC), which are considered to be involved in defecation in healthy subjects, occur because of cecal pressure, based on Bernoulli's principle. A virtual healthy model (VHM), a virtual constipation model, and a virtual diarrhea model were set up. For each model, the CTT was determined from the length of each part of the colon after calculating the velocity from the cecal inflow volume. In the VHM, the pressure change was calculated and its consistency with HAPC verified. The CTT changed according to the difference between the cecal inflow volume and the caliber of the intestinal tract, and was inversely proportional to the cecal inflow volume. Compared with the VHM, the CTT was prolonged in the virtual constipation model and shortened in the virtual diarrhea model. The calculated pressure of the VHM and the gradient of the interlocked graph were similar to those of HAPC. The CTT and HAPC can thus be explained by Bernoulli's principle, and constipation and diarrhea may be fundamentally influenced by flow dynamics.
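The transit-time calculation described above follows from the continuity equation underlying Bernoulli's principle: for a segment of cross-sectional area A carrying flow Q, the velocity is v = Q/A and the transit time is t = L/v. The segment lengths, calibers, and inflow rates below are illustrative values for a virtual colon, not the paper's.

```python
import math

def transit_time(length_cm, caliber_cm, inflow_ml_per_h):
    """Plug-flow transit time for one colonic segment: v = Q/A, t = L/v.
    Segment dimensions and inflow are illustrative, not the paper's values."""
    area = math.pi * (caliber_cm / 2.0) ** 2      # cm^2
    q = inflow_ml_per_h / 3600.0                  # ml/s == cm^3/s
    return length_cm * area / q                   # seconds

# Ascending + transverse + descending segments of a virtual colon.
segments = [(15, 6.0), (50, 5.5), (25, 4.0)]      # (length cm, caliber cm)

ctt_normal = sum(transit_time(L, d, 60.0) for L, d in segments)
ctt_low_inflow = sum(transit_time(L, d, 30.0) for L, d in segments)

# CTT is inversely proportional to cecal inflow, as the model predicts:
# halving the inflow doubles the transit time.
print(ctt_low_inflow / ctt_normal)   # -> 2.0
```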
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Jinghao; Kim, Sung; Jabbour, Salma
2010-03-15
Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image-guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. Distance-based and volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM.
The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes, and the average time of the local transformation was from 20 to 34 s on two deformable superquadrics mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide a foundation for achieving real-time adaptive radiotherapy.
Carbone, V; Fluit, R; Pellikaan, P; van der Krogt, M M; Janssen, D; Damsgaard, M; Vigneron, L; Feilkas, T; Koopman, H F J M; Verdonschot, N
2015-03-18
When analyzing complex biomechanical problems such as predicting the effects of orthopedic surgery, subject-specific musculoskeletal models are essential to achieve reliable predictions. The aim of this paper is to present the Twente Lower Extremity Model 2.0, a new comprehensive dataset of the musculoskeletal geometry of the lower extremity, which is based on medical imaging data and dissection performed on the right lower extremity of a fresh male cadaver. Bone, muscle and subcutaneous fat (including skin) volumes were segmented from computed tomography and magnetic resonance image scans. Inertial parameters were estimated from the image-based segmented volumes. A complete cadaver dissection was performed, in which bony landmarks, attachment sites and lines-of-action of 55 muscle actuators and 12 ligaments, bony wrapping surfaces, and joint geometry were measured. The obtained musculoskeletal geometry dataset was finally implemented in the AnyBody Modeling System (AnyBody Technology A/S, Aalborg, Denmark), resulting in a model consisting of 12 segments, 11 joints and 21 degrees of freedom, and including 166 muscle-tendon elements for each leg. The new TLEM 2.0 dataset was purposely built to be easily combined with novel image-based scaling techniques, such as bone surface morphing, muscle volume registration and muscle-tendon path identification, in order to obtain subject-specific musculoskeletal models in a quick and accurate way. The complete dataset, including CT and MRI scans and segmented volumes and surfaces, is made available at http://www.utwente.nl/ctw/bw/research/projects/TLEMsafe for the biomechanical community, in order to accelerate the development and adoption of subject-specific models on a large scale. TLEM 2.0 is freely shared for non-commercial use only, under acceptance of the TLEMsafe Research License Agreement. Copyright © 2014 Elsevier Ltd. All rights reserved.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results
NASA Technical Reports Server (NTRS)
Glass, B. J. (Editor)
1992-01-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach-layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 1. Overview
NASA Technical Reports Server (NTRS)
Glass, B. J. (Editor)
1992-01-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach-layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS test bed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results
NASA Astrophysics Data System (ADS)
Glass, B. J.
1992-10-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach-layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
ERIC Educational Resources Information Center
Naval Training Equipment Center, Orlando, FL. Training Analysis and Evaluation Group.
The Design of Training Systems (DOTS) project was initiated by the Department of Defense (DOD) to develop tools for the effective management of military training organizations. Volume 3 contains the model and data base program descriptions and operating procedures designed for phase 2 of the project. Flow charts and program listings for the…
CT contrast predicts pancreatic cancer treatment response to verteporfin-based photodynamic therapy
NASA Astrophysics Data System (ADS)
Jermyn, Michael; Davis, Scott C.; Dehghani, Hamid; Huggett, Matthew T.; Hasan, Tayyaba; Pereira, Stephen P.; Bown, Stephen G.; Pogue, Brian W.
2014-04-01
The goal of this study was to determine dominant factors affecting treatment response in pancreatic cancer photodynamic therapy (PDT), based on clinically available information in the VERTPAC-01 trial. This trial investigated the safety and efficacy of verteporfin PDT in 15 patients with locally advanced pancreatic adenocarcinoma. CT scans before and after contrast enhancement from the 15 patients in the VERTPAC-01 trial were used to determine venous-phase blood contrast enhancement and this was correlated with necrotic volume determined from post-treatment CT scans, along with estimation of optical absorption in the pancreas for use in light modeling of the PDT treatment. Energy threshold contours yielded estimates for necrotic volume based on this light modeling. Both contrast-derived venous blood content and necrotic volume from light modeling yielded strong correlations with observed necrotic volume (R2 = 0.85 and 0.91, respectively). These correlations were much stronger than those obtained by correlating energy delivered versus necrotic volume in the VERTPAC-01 study and in retrospective analysis from a prior clinical study. This demonstrates that contrast CT can provide key surrogate dosimetry information to assess treatment response. It also implies that light attenuation is likely the dominant factor in the VERTPAC treatment response, as opposed to other factors such as drug distribution. This study is the first to show that contrast CT provides needed surrogate dosimetry information to predict treatment response in a manner which uses standard-of-care clinical images, rather than invasive dosimetry methods.
Representative volume element model of lithium-ion battery electrodes based on X-ray nano-tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashkooli, Ali Ghorbani; Amirfazli, Amir; Farhad, Siamak
For this, a new model that keeps all major advantages of the single-particle model of lithium-ion batteries (LIBs) and includes the three-dimensional structure of the electrode was developed. Unlike the single spherical particle, this model considers a small volume element of an electrode, called the Representative Volume Element (RVE), which represents the real electrode structure. The advantages of using the RVE as the model geometry were demonstrated for a typical LIB electrode consisting of nano-particle LiFePO4 (LFP) active material. The three-dimensional morphology of the LFP electrode was reconstructed using synchrotron X-ray nano-computed tomography at the Advanced Photon Source of Argonne National Laboratory. A 27 μm³ cube from the reconstructed structure was chosen as the RVE for simulation purposes. The model was employed to predict the voltage curve in a half-cell during galvanostatic operation and validated with experimental data. The simulation results showed that the distribution of lithium inside the electrode microstructure is very different from the results obtained with the single-particle model. The range of lithium concentration is found to be much greater, which successfully illustrates the effect of microstructure heterogeneity.
Representative volume element model of lithium-ion battery electrodes based on X-ray nano-tomography
Kashkooli, Ali Ghorbani; Amirfazli, Amir; Farhad, Siamak; ...
2017-01-28
For this, a new model that keeps all major advantages of the single-particle model of lithium-ion batteries (LIBs) and includes three-dimensional structure of the electrode was developed. Unlike the single spherical particle, this model considers a small volume element of an electrode, called the Representative Volume Element (RVE), which represent the real electrode structure. The advantages of using RVE as the model geometry was demonstrated for a typical LIB electrode consisting of nano-particle LiFePO 4 (LFP) active material. The three-dimensional morphology of the LFP electrode was reconstructed using a synchrotron X-ray nano-computed tomography at the Advanced Photon Source of themore » Argonne National. A 27 μm 3 cube from reconstructed structure was chosen as the RVE for the simulation purposes. The model was employed to predict the voltage curve in a half-cell during galvanostatic operations and validated with experimental data. The simulation results showed that the distribution of lithium inside the electrode microstructure is very different from the results obtained based on the single-particle model. The range of lithium concentration is found to be much greater, successfully illustrates the effect of microstructure heterogeneity.« less
Formation of a disordered solid via a shock-induced transition in a dense particle suspension
NASA Astrophysics Data System (ADS)
Petel, Oren E.; Frost, David L.; Higgins, Andrew J.; Ouellet, Simon
2012-02-01
Shock wave propagation in multiphase media is typically dominated by the relative compressibility of the two components of the mixture. The difference in the compressibility of the components results in a shock-induced variation in the effective volume fraction of the suspension, tending toward the random-close-packing limit for the system, so that a disordered solid can form within the suspension. The present study uses a Hugoniot-based model to demonstrate this variation in the volume fraction of the solid phase, as well as a simple hard-sphere model to investigate the formation of disordered structures within uniaxially compressed model suspensions. Both models are discussed in terms of available experimental plate-impact data in dense suspensions. Through coordination-number statistics of the mesoscopic hard-sphere model, comparisons are made with the trends of the experimental pressure-volume fraction relationship to illustrate the role of these disordered structures in the bulk properties of the suspensions. A criterion for the dynamic stiffening of suspensions under high-rate dynamic loading is suggested as an analog to quasi-static jamming, based on the results of the simulations.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1995-01-01
A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be used to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-infinite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, samples of which are included in this report.
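The analytical route above can be sketched with the standard Cook-Felderman discretization of the one-dimensional, semi-infinite, constant-property solution. This is an illustrative sketch, not the report's FORTRAN implementation; the substrate property product ρck and the sampled temperature history are assumed inputs.

```python
import math

def heat_flux_semi_infinite(times, temps, rho_c_k):
    """Cook-Felderman discretization of the 1D semi-infinite, constant-property
    solution: surface heat flux from a sampled surface-temperature history.
    rho_c_k is the substrate density * specific heat * conductivity product."""
    coef = 2.0 * math.sqrt(rho_c_k / math.pi)
    flux = [0.0]
    for n in range(1, len(times)):
        s = sum((temps[i] - temps[i - 1]) /
                (math.sqrt(times[n] - times[i]) + math.sqrt(times[n] - times[i - 1]))
                for i in range(1, n + 1))
        flux.append(coef * s)
    return flux
```

A quick self-check: for a constant applied flux q0, the exact surface temperature rises as 2·q0·√(t/(π·ρck)), so feeding that history back in should recover approximately q0.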
Voluminator 2.0 - Speeding up the Approximation of the Volume of Defective 3D Building Models
NASA Astrophysics Data System (ADS)
Sindram, M.; Machl, T.; Steuer, H.; Pültz, M.; Kolbe, T. H.
2016-06-01
Semantic 3D city models are increasingly used as a data source in urban planning and analysis processes. They represent a virtual copy of reality and serve as a common information base for examining urban questions. A significant advantage of virtual city models is that important indicators such as building volumes, topological relationships between objects, and other geometric as well as thematic information can be derived. Knowledge of the exact building volume is an essential basis for estimating building energy demand. In order to determine the volume of buildings with conventional algorithms and tools, the buildings may not contain any topological or geometrical errors. In reality, however, city models very often contain errors such as missing surfaces, duplicated faces, and misclosures. To overcome these errors, Steuer et al. (2015) presented a robust method for approximating the volume of building models. For this purpose, a bounding box of the building is divided into a regular grid of voxels and it is determined which voxels lie inside the building. The regular arrangement of the voxels leads to a high number of topological tests and prevents the application of this method at very high resolutions. In this paper we present an extension of the algorithm using an octree approach, limiting the subdivision of space to regions around surfaces of the building models and to regions where, in the case of defective models, the topological tests are inconclusive. We show that the computation time can be significantly reduced while preserving the robustness against geometrical and topological errors.
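The octree idea, refining only where a cell straddles the model surface, can be illustrated on an analytic solid. The sphere below merely stands in for a building envelope, and the half-volume rule for unresolved boundary cells is an assumption of this sketch, not the paper's method for inconclusive topological tests.

```python
import math

def sphere_octree_volume(r=1.0, max_depth=7):
    """Approximate a sphere's volume with an octree: cells entirely inside or
    outside are resolved immediately; only cells cut by the surface are split."""
    def cell(lo, size, depth):
        near_sq = far_sq = 0.0
        for a in lo:                       # per-axis nearest/farthest distance
            lo_d, hi_d = abs(a), abs(a + size)
            if not (a < 0.0 < a + size):
                near_sq += min(lo_d, hi_d) ** 2
            far_sq += max(lo_d, hi_d) ** 2
        if math.sqrt(far_sq) <= r:         # fully inside the sphere
            return size ** 3
        if math.sqrt(near_sq) >= r:        # fully outside
            return 0.0
        if depth == max_depth:             # unresolved boundary cell: take half
            return 0.5 * size ** 3
        h = 0.5 * size
        return sum(cell((lo[0] + i * h, lo[1] + j * h, lo[2] + k * h), h, depth + 1)
                   for i in (0, 1) for j in (0, 1) for k in (0, 1))
    return cell((-r, -r, -r), 2.0 * r, 0)
```

Because only boundary cells are subdivided, the work grows with the surface area rather than the volume, which is the speed-up the paper exploits.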
Validation of a White-light 3D Body Volume Scanner to Assess Body Composition.
Medina-Inojosa, Jose; Somers, Virend; Jenkins, Sarah; Zundel, Jennifer; Johnson, Lynne; Grimes, Chassidy; Lopez-Jimenez, Francisco
2017-01-01
Estimating body fat content has been shown to be a better predictor of adiposity-related cardiovascular risk than the commonly used body mass index (BMI). The white-light 3D body volume index (BVI) scanner is a non-invasive device normally used in the clothing industry to assess body shapes and sizes. We tested the hypothesis that the volume obtained by BVI is comparable to the volume obtained by air displacement plethysmography (Bod-Pod) and thus capable of assessing body fat mass using the bi-compartmental principles of body composition. We compared BVI to the Bod-Pod, a validated bi-compartmental method for assessing body fat percentage that uses pressure-volume relationships in isothermal conditions to estimate body volume. Volume is then used to calculate body density (BD) by applying the formula density = body mass/volume. Body fat mass percentage is then calculated using the Siri formula: (4.95/BD - 4.50) × 100. Subjects were undergoing a wellness evaluation; measurements from both devices were obtained the same day. A prediction model for total Bod-Pod volume was developed using linear regression based on 80% of the observations (N = 971), as follows: predicted Bod-Pod volume (L) = 9.498 + 0.805 × (BVI volume, L) - 0.0411 × (age, years) - 3.295 × (male = 0, female = 1) + 0.0554 × (BVI volume, L) × (male = 0, female = 1) + 0.0282 × (age, years) × (male = 0, female = 1). Predictions for Bod-Pod volume based on the estimated model were then calculated for the remaining 20% (N = 243) and compared to the volume measured by the Bod-Pod. Mean age among the 971 individuals was 41.5 ± 12.9 years, 39.4% were men, weight was 81.6 ± 20.9 kg, and BMI was 27.8 ± 6.3 kg/m². The average difference between volume measured by Bod-Pod and volume predicted by BVI was 0.0 L (median -0.4 L, IQR -1.8 L to 1.5 L, R² = 0.9845). The average difference between measured and predicted body fat was -1% (median -2.7%, IQR -13.2 to 9.9, R² = 0.9236).
Volume and body fat mass can be estimated using volume measurements obtained by a white-light 3D body scanner and the prediction model developed in this study.
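The study's prediction equation and the Siri formula can be combined directly. The regression coefficients below are transcribed from the abstract, while the example inputs are invented for illustration; this is a sketch, not a validated clinical tool.

```python
def predicted_bodpod_volume(bvi_volume_l, age_years, female):
    """Linear model from the study; female = 1, male = 0, volumes in liters."""
    return (9.498 + 0.805 * bvi_volume_l - 0.0411 * age_years - 3.295 * female
            + 0.0554 * bvi_volume_l * female + 0.0282 * age_years * female)

def siri_body_fat_percent(mass_kg, volume_l):
    """Siri two-compartment formula; body density in kg/L."""
    density = mass_kg / volume_l
    return (4.95 / density - 4.50) * 100.0
```

For a hypothetical 40-year-old man with a BVI volume of 86.8 L and a mass of 81.6 kg, the predicted Bod-Pod volume is about 77.7 L and the Siri estimate about 21.5% body fat.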
Valinoti, Maddalena; Fabbri, Claudio; Turco, Dario; Mantovan, Roberto; Pasini, Antonio; Corsi, Cristiana
2018-01-01
Radiofrequency ablation (RFA) is an important and promising therapy for atrial fibrillation (AF) patients. Optimizing patient selection and providing an accurate anatomical guide could improve the RFA success rate. In this study we propose a unified, fully automated approach to build a 3D patient-specific left atrium (LA) model, including the pulmonary veins (PVs) to provide an accurate anatomical guide during RFA, and excluding them to characterize LA volumetry and support patient selection for AF ablation. Magnetic resonance data from twenty-six patients referred for AF RFA were processed by applying an edge-based level set approach guided by a phase-based edge detector to obtain the 3D LA model with PVs. An automated technique based on the shape diameter function was designed and applied to remove the PVs and compute LA volume. The 3D LA models were qualitatively compared with 3D LA surfaces acquired during the ablation procedure. An expert radiologist manually traced the LA on MR images twice. LA surfaces from the automatic approach and manual tracing were compared by mean surface-to-surface distance, and LA volumes were compared with volumes from manual segmentation by linear and Bland-Altman analyses. Qualitative comparison of the 3D LA models showed several inaccuracies in the model obtained during the RFA procedure; in particular, PV reconstruction was not accurate and the left atrial appendage was missing. LA surfaces were very similar (mean surface-to-surface distance: 2.3 ± 0.7 mm). LA volumes were in excellent agreement (y = 1.03x - 1.4, r = 0.99, bias = -1.37 mL (-1.43%), SD = 2.16 mL (2.3%), mean percentage difference = 1.3% ± 2.1%). The results showed that the proposed 3D patient-specific LA model with PVs describes LA anatomy better than models derived from the navigation system, thus potentially improving the localization of electrogram and voltage information and reducing fluoroscopy time during RFA.
Quantitative assessment of LA volume derived from our 3D LA model without PVs is also accurate and may provide important information for patient selection for RFA. Copyright © 2017 Elsevier Inc. All rights reserved.
Topology-aware illumination design for volume rendering.
Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan
2016-08-19
Direct volume rendering is a flexible and effective approach for inspecting large volumetric data such as medical and biological images. In conventional volume rendering, it is often time-consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the illumination-model variables to different structures manually and thus neglect the important illumination variations due to structural differences. We introduce a novel illumination design paradigm for volume rendering that uses topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of the input volumetric data. The automation of illumination design is based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize differences in their perceived illuminance, a two-phase topology-aware illuminance perception contrast model is proposed, based on the psychological concept of the just-noticeable difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that our approach is more effective in depth and shape depiction, and provides higher perceptual differences between structures.
Delay functions in trip assignment for transport planning process
NASA Astrophysics Data System (ADS)
Leong, Lee Vien
2017-10-01
In the transportation planning process, volume-delay and turn-penalty functions are needed in traffic assignment to determine travel times on road network links. The volume-delay function describes the speed-flow relationship, while the turn-penalty function describes the delay associated with making a turn at an intersection. The volume-delay function used in this study is the revised Bureau of Public Roads (BPR) function with constant parameters α = 0.8298 and β = 3.361, while the turn-penalty functions for signalized intersections were developed based on uniform, random, and overflow delay models. Parameters such as green time, cycle time, and saturation flow were used in the development of the turn-penalty functions. In order to assess the accuracy of the delay functions, the road network in the areas of Nibong Tebal, Penang and Parit Buntar, Perak was developed and modelled using transportation demand forecasting software. To calibrate the models, phase times and traffic volumes at fourteen signalized intersections within the study area were collected during morning and evening peak hours. The volumes assigned using the revised BPR function and the developed turn-penalty functions show close agreement with actual recorded traffic volumes, with accuracies ranging from 80.08% to 93.04% for the morning peak model and from 75.59% to 95.33% for the evening peak model. For the yield left-turn lanes, the lowest accuracies obtained for the morning and evening peak models were 60.94% and 69.74% respectively, while the highest for both models was 100%. It can therefore be concluded that developing delay functions based on local road conditions is important, as localized delay functions produce better estimates of link travel times and hence better planning for future scenarios.
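The revised BPR function with the fitted parameters quoted above can be written down directly. The standard BPR form t = t_f·(1 + α·(v/c)^β) is assumed here, with the free-flow time and link capacity as hypothetical inputs.

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.8298, beta=3.361):
    """Revised Bureau of Public Roads volume-delay function with the
    study's fitted constants alpha = 0.8298 and beta = 3.361."""
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)
```

At v/c = 1 a 10-minute free-flow link takes 10 × 1.8298 ≈ 18.3 minutes, and because β > 3 the delay grows steeply once volume exceeds capacity.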
Body Fluid Regulation and Hemopoiesis in Space Flight
NASA Technical Reports Server (NTRS)
1997-01-01
In this session, Session JA2, the discussion focuses on the following topics: Body Mass and Fluid Distribution During Long-Term Spaceflight with and without Countermeasures; Plasma Volume, Extracellular Fluid Volume, and Regulatory Hormones During Long-Term Space Flight; Effect of Microgravity and its Ground-Based Models on Fluid Volumes and Hemocirculatory Volumes; Seventeen Weeks of Horizontal Bed Rest, Lower Body Negative Pressure Testing, and the Associated Plasma Volume Response; Evaporative Water Loss in Space: Theoretical and Experimental Studies; Erythropoietin Under Real and Simulated Micro-G Conditions in Humans; and Vertebral Bone Marrow Changes Following Space Flight.
NASA Astrophysics Data System (ADS)
Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.
2012-12-01
Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofracture. Hydrological models have shown that lakes above a critical volume can supply the necessary water, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite sensors such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS, and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lacked any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and, for the first time, undertake a robust analysis of the errors. We used atmospherically corrected data from ASTER and MODIS as input to the water-reflectance model. Three physical parameters are needed: the bed albedo, the water attenuation coefficient, and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to understanding the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
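The Bouguer-Lambert-Beer depth retrieval used in such studies is commonly written as z = [ln(Ad - R∞) - ln(R - R∞)]/g, with Ad the lake-bottom albedo, R∞ the reflectance of optically deep water, and g the attenuation coefficient. The sketch below assumes that form; the parameter values in the self-check are invented.

```python
import math

def lake_depth(reflectance, bed_albedo, deep_water_reflectance, g):
    """Bouguer-Lambert-Beer depth retrieval:
    z = [ln(Ad - Rinf) - ln(R - Rinf)] / g."""
    return (math.log(bed_albedo - deep_water_reflectance)
            - math.log(reflectance - deep_water_reflectance)) / g
```

Inverting the forward model R = R∞ + (Ad - R∞)·exp(-g·z) round-trips the depth, a quick self-check before perturbing Ad, R∞, and g in a Monte Carlo uncertainty analysis.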
FOSSIL2 energy policy model documentation: FOSSIL2 documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1980-10-01
This report discusses the structure, derivations, assumptions, and mathematical formulation of the FOSSIL2 model. Each major facet of the model - supply/demand interactions, industry financing, and production - has been designed to parallel closely the actual cause/effect relationships determining the behavior of the United States energy system. The data base for the FOSSIL2 program is large, as is appropriate for a system dynamics simulation model. When possible, all data were obtained from sources well known to experts in the energy field, with cost and resource estimates based on DOE data whenever possible. This report presents the FOSSIL2 model at several levels. Volumes II and III list the equations that comprise the FOSSIL2 model, along with variable definitions and a cross-reference list of the model variables; Volume III lists the model equations with a one-line definition for each equation, in a short, readable format.
Yu, Alex; Jackson, Trachette; Tsume, Yasuhiro; Koenigsknecht, Mark; Wysocki, Jeffrey; Marciani, Luca; Amidon, Gordon L; Frances, Ann; Baker, Jason R; Hasler, William; Wen, Bo; Pai, Amit; Sun, Duxin
2017-11-01
Gastrointestinal (GI) fluid volume and its dynamic change are integral to studying drug disintegration, dissolution, transit, and absorption. However, key questions regarding the local volume and its absorption, secretion, and transit remain unanswered. The dynamic fluid compartment absorption and transit (DFCAT) model is proposed to estimate in vivo GI volume and GI fluid transport based on magnetic resonance imaging (MRI)-quantified fluid volumes. The model was validated using local GI concentrations of phenol red in the human GI tract, measured directly in a human GI intubation study after oral dosing of non-absorbable phenol red. The measured local GI concentration of phenol red ranged from 0.05 to 168 μg/mL in the stomach, up to 563 μg/mL in the duodenum, 202 μg/mL in the proximal jejunum, and 478 μg/mL in the distal jejunum. The DFCAT model characterized the observed MRI fluid volume and its dynamic changes in the stomach, from 275 to 46.5 mL over 0 to 30 min, with a mucus layer volume of 40 mL. The volumes of the 30 small-intestine compartments ranged from a maximum of 14.98 mL to a minimum of 0.26 mL (0-120 min), with a mucus layer volume of 5 mL per compartment. Regional fluid volumes over 0 to 120 min ranged from 5.6 to 20.38 mL in the proximal small intestine, 36.4 to 44.08 mL in the distal small intestine, and 42 to 64.46 mL in the total small intestine. With future improvements, the DFCAT model can be applied to predict drug dissolution and absorption in the human GI tract.
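A minimal compartmental sketch of the transit part of such a model: first-order stomach emptying into a chain of small-intestine compartments, with absorption and secretion switched off so total fluid is conserved. The rate constants are illustrative, not the paper's fitted values, and the 235 mL initial stomach volume is the abstract's 275 mL minus the 40 mL mucus layer, treated here as a fixed baseline outside the simulation.

```python
def simulate_transit(v_stomach=235.0, n_gut=30, k_empty=0.12, k_transit=0.35,
                     dt=0.1, t_end=120.0):
    """Toy fluid-transit chain (explicit Euler): first-order stomach emptying
    into a series of small-intestine compartments with first-order transit.
    No absorption or secretion, so total fluid is conserved."""
    gut = [0.0] * n_gut
    exited = 0.0
    t = 0.0
    while t < t_end:
        dv = k_empty * v_stomach * dt      # stomach outflow this step
        v_stomach -= dv
        inflow = dv
        for i in range(n_gut):             # pass fluid down the chain
            out = k_transit * gut[i] * dt
            gut[i] += inflow - out
            inflow = out
        exited += inflow                   # outflow of the last compartment
        t += dt
    return v_stomach, gut, exited
```

Because every transfer just moves fluid between compartments, the stomach, gut, and exited volumes must always sum to the initial dose, which makes conservation a convenient sanity check.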
Back to the future: estimating pre-injury brain volume in patients with traumatic brain injury.
Ross, David E; Ochs, Alfred L; D Zannoni, Megan; Seabaugh, Jan M
2014-11-15
A recent meta-analysis by Hedman et al. allows for accurate estimation of brain volume changes throughout the life span. Additionally, Tate et al. showed that intracranial volume at a later point in life can be used to reliably estimate brain volume at an earlier point in life. These advancements were combined to create a model for estimating brain volume just prior to injury in a group of patients with mild or moderate traumatic brain injury (TBI). This volume estimation model was used in combination with actual measurements of brain volume to test hypotheses about progressive brain volume changes in the patients. Twenty-six patients with mild or moderate TBI were compared to 20 normal control subjects. NeuroQuant® was used to measure brain MRI volume. Brain volume after the injury (from MRI scans performed at t1 and t2) was compared to brain volume just before the injury (volume estimated at t0) using longitudinal designs. Groups were compared with respect to volume changes in whole brain parenchyma (WBP) and its three major subdivisions: cortical gray matter (GM), cerebral white matter (CWM), and subcortical nuclei plus infratentorial regions (SCN+IFT). Using the normal control data, the volume estimation model was tested by comparing measured to estimated brain volume; reliability ranged from good to excellent. During the initial phase after injury (t0-t1), the TBI patients had abnormally rapid atrophy of WBP and CWM and abnormally rapid enlargement of SCN+IFT. Rates of volume change during t0-t1 correlated with cross-sectional measures of volume change at t1, supporting the internal reliability of the volume estimation model. A logistic regression analysis using the volume change data produced a function that perfectly predicted group membership (TBI patients vs. normal control subjects). During the first few months after injury, patients with mild or moderate TBI have rapid atrophy of WBP and CWM and rapid enlargement of SCN+IFT.
The magnitude and pattern of the volume changes may allow for the eventual development of diagnostic tools based on the volume estimation approach. Copyright © 2014 Elsevier Inc. All rights reserved.
SU-F-R-51: Radiomics in CT Perfusion Maps of Head and Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nesteruk, M; Riesterer, O; Veit-Haibach, P
2016-06-15
Purpose: The aim of this study was to test the predictive value of radiomics features of CT perfusion (CTP) for tumor control, based on a preselection of radiomics features in a robustness study. Methods: 11 patients with head and neck cancer (HNC) and 11 patients with lung cancer were included in the robustness study to preselect stable radiomics parameters. Data from 36 HNC patients treated with definitive radiochemotherapy (median follow-up 30 months) were used to build a predictive model based on these parameters. All patients underwent pre-treatment CTP. 315 texture parameters were computed for three perfusion maps: blood volume, blood flow, and mean transit time. The variability of the texture parameters was tested with respect to non-standardizable perfusion computation factors (noise level and artery contouring) using intraclass correlation coefficients (ICC). The parameter with the highest ICC in each correlated group of parameters (inter-parameter Spearman correlations) was tested for its predictive value. The final model to predict tumor control was built using multivariate Cox regression analysis with backward selection of the variables. For comparison, a predictive model based on tumor volume was created. Results: Ten parameters were found to be stable in both HNC and lung cancer with regard to the potentially non-standardizable factors, after correction for inter-parameter correlations. In the multivariate backward selection of the variables, blood flow entropy showed a highly significant impact on tumor control (p = 0.03) with a concordance index (CI) of 0.76. Blood flow entropy was significantly lower in the patient group with controlled tumors at 18 months (p < 0.1). The new model showed a higher concordance index than the tumor volume model (CI = 0.68). Conclusion: The preselection of variables in the robustness study allowed building a predictive radiomics-based model of tumor control in HNC despite a small patient cohort.
This model was found to be superior to the volume-based model. The project was supported by the KFSP Tumor Oxygenation of the University of Zurich, by a grant of the Center for Clinical Research, University and University Hospital Zurich, and by a research grant from Merck (Schweiz) AG.
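The "blood flow entropy" feature is, in one common radiomics definition, the first-order (histogram) entropy of the voxel values in the perfusion map; the binning scheme below is an assumption of this sketch, not necessarily the study's exact implementation.

```python
import math
from collections import Counter

def histogram_entropy(values, n_bins=64):
    """First-order (histogram) entropy in bits, a common radiomics feature:
    bin the voxel values, then compute -sum(p * log2(p)) over occupied bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0      # guard against a constant image
    counts = Counter(min(int((v - lo) / width), n_bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A homogeneous (controlled-tumor-like) map gives low entropy, while a heterogeneous map spreads mass over many bins and gives high entropy, which is the contrast the predictive model exploits.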
How large is the typical subarachnoid hemorrhage? A review of current neurosurgical knowledge.
Whitmore, Robert G; Grant, Ryan A; LeRoux, Peter; El-Falaki, Omar; Stein, Sherman C
2012-01-01
Despite the morbidity and mortality of subarachnoid hemorrhage (SAH), the average volume of a typical hemorrhage is not well defined, and animal models of SAH often do not accurately mimic the human disease process. The purpose of this study was to estimate the average SAH volume, allowing standardization of animal models of the disease. We performed a MEDLINE search (1956 to 2010) for SAH volumes and erythrocyte counts in human cerebrospinal fluid, as well as for the volumes of blood used in animal injection models of SAH. We also polled members of the American Association of Neurological Surgeons (AANS) for estimates of typical SAH volume. Using quantitative data from the literature, we calculated the total volume of SAH as the volume of blood clotted in the basal cisterns plus the volume of dispersed blood in the cerebrospinal fluid. The human literature search yielded 322 publications and the animal literature search 237 studies. Four quantitative human studies reported blood clot volumes ranging from 0.2 to 170 mL, with a mean of approximately 20 mL. There was only one quantitative study reporting cerebrospinal fluid red blood cell counts from serial lumbar puncture after SAH; dispersed blood volume ranged from 2.9 to 45.9 mL, and we used the mean of 15 mL for our calculation. The total volume of SAH therefore equals approximately 35 mL. The AANS poll yielded 176 responses, ranging from 2 to 350 mL, with a mean of 33.9 ± 4.4 mL, confirming our estimate. Based on this estimate of a total SAH volume of 35 mL, animal injection models may now be standardized to portray the human disease process more accurately. Copyright © 2012 Elsevier Inc. All rights reserved.
Constantin, Julian Gelman; Schneider, Matthias; Corti, Horacio R
2016-06-09
The glass transition temperatures of trehalose, sucrose, glucose, and fructose aqueous solutions have been predicted as a function of water content using the free volume/percolation model (FVPM). This model only requires the molar volume of water in the liquid and supercooled regimes, the molar volumes of the hypothetical pure liquid sugars at temperatures below their glass transition temperatures, and the molar volumes of the mixtures at the glass transition temperature. The model is simplified by assuming that the excess thermal expansion coefficient is negligible for saccharide-water mixtures; this ideal FVPM becomes identical to the Gordon-Taylor model. It was found that the behavior of the water molar volume in trehalose-water mixtures at low temperatures can be obtained by assuming that the FVPM holds for this mixture. The temperature dependence of the water molar volume in the supercooled region of interest seems to be compatible with the recent hypothesis of the existence of two structures of liquid water, with high-density liquid water being the state of water in the sugar solutions. The idealized FVPM describes the measured glass transition temperatures of sucrose, glucose, and fructose aqueous solutions with much better accuracy than both the Gordon-Taylor model based on an empirical kGT constant dependent on the saccharide glass transition temperature and the Couchman-Karasz model using experimental heat capacity changes of the components at the glass transition temperature. Thus, the FVPM seems to be an excellent tool for predicting the glass transition temperature of other aqueous saccharide and polyol solutions by resorting to easily available volumetric information.
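The ideal-FVPM/Gordon-Taylor prediction reduces to a one-line mixing rule, Tg = (w_s·Tg_s + k·w_w·Tg_w)/(w_s + k·w_w). The Tg values and k below are illustrative round numbers (water Tg near 136 K is standard; the sugar Tg and k are not the paper's fitted parameters).

```python
def gordon_taylor_tg(w_water, tg_water=136.0, tg_sugar=340.0, k=5.0):
    """Gordon-Taylor glass-transition temperature of a sugar-water mixture.
    Temperatures in kelvin; w_water is the water mass fraction."""
    w_sugar = 1.0 - w_water
    return (w_sugar * tg_sugar + k * w_water * tg_water) / (w_sugar + k * w_water)
```

Adding water plasticizes the mixture, so Tg falls monotonically from the dry-sugar value toward that of pure water.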
Initial retrieval sequence and blending strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pemwell, D.L.; Grenard, C.E.
1996-09-01
This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment, and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk, and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung
2018-01-01
We estimated the volume of vestibular schwannomas with an ice cream cone formula using thin-slice magnetic resonance images (MRI) and compared the estimation accuracy among different formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively; informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included the intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were at least 0.992, indicating high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with APEs of 20.3% and 29.2%, respectively. The two-component ice cream cone method and the two-component ellipsoidal and Linskey's formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey's formulas, allow vestibular schwannoma volume to be estimated more accurately than any of the one-component formulas.
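The one-component formulas are standard: cuboidal V = abc, ellipsoidal V = πabc/6, and spherical V = πd³/6 with d the mean of the three diameters. (The two-component formulas additionally split the intracanalicular and cisternal parts, which needs segment-wise measurements and is omitted from this sketch.) Note that the cuboid exceeds the inscribed ellipsoid by exactly 6/π ≈ 1.91, consistent with the roughly 1.9-fold overestimation reported.

```python
import math

def cuboidal_volume(a, b, c):
    """Cuboidal (ABC) estimate from three orthogonal diameters."""
    return a * b * c

def ellipsoidal_volume(a, b, c):
    """Ellipsoidal estimate: pi * a * b * c / 6."""
    return math.pi * a * b * c / 6.0

def spherical_volume(a, b, c):
    """Spherical estimate from the mean of the three diameters."""
    d = (a + b + c) / 3.0
    return math.pi * d ** 3 / 6.0
```

For an equidimensional tumor the ellipsoidal and spherical estimates coincide; they diverge as the tumor becomes elongated, which is one reason the spherical formula has the largest APE.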
Computer Modeling to Evaluate the Impact of Technology Changes on Resident Procedural Volume.
Grenda, Tyler R; Ballard, Tiffany N S; Obi, Andrea T; Pozehl, William; Seagull, F Jacob; Chen, Ryan; Cohn, Amy M; Daskin, Mark S; Reddy, Rishindra M
2016-12-01
As resident "index" procedures change in volume due to advances in technology or reliance on simulation, it may be difficult to ensure trainees meet case requirements, and training programs need metrics to determine how many residents their institutional volume can support. As a case study of how such metrics can be applied, we evaluated a case distribution simulation model examining the program-level mediastinoscopy and endobronchial ultrasound (EBUS) volumes needed to train thoracic surgery residents. A computer model was created to simulate case distribution based on annual case volume, number of trainees, and rotation length. Single-institution case volume data (2011-2013) were applied, and 10,000 simulation years were run to predict the likelihood (95% confidence interval) of all residents (4 trainees) achieving board requirements for operative volume during a 2-year program. The mean annual mediastinoscopy volume was 43. In a simulation of pre-2012 board requirements (thoracic pathway, 25; cardiac pathway, 10), there was a 6% probability of all 4 residents meeting requirements. Under post-2012 requirements (thoracic, 15; cardiac, 10), however, the likelihood increased to 88%. When EBUS volume (mean 19 cases per year) was concurrently evaluated in the post-2012 era (thoracic, 10; cardiac, 0), the likelihood of all 4 residents meeting case requirements was only 23%. This model provides a metric to predict the probability of residents meeting case requirements in an era of changing volume by accounting for unpredictable and inequitable case distribution. It could be applied across operations, procedures, or disease diagnoses and may be particularly useful in developing resident curricula and schedules.
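The core of such a model can be sketched as a Monte Carlo draw: distribute a two-year case volume uniformly at random among the residents and count the fraction of simulated cohorts in which everyone meets the requirement. Rotation-length constraints, which the published model includes, are omitted here, and a single uniform 15-case requirement is assumed rather than the mixed per-pathway thresholds.

```python
import random

def prob_all_meet(annual_cases=43, n_residents=4, years=2,
                  requirement=15, n_trials=10000, seed=1):
    """Fraction of simulated cohorts in which every resident reaches the
    case requirement when cases are assigned uniformly at random."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        counts = [0] * n_residents
        for _ in range(annual_cases * years):
            counts[rng.randrange(n_residents)] += 1
        if min(counts) >= requirement:
            successes += 1
    return successes / n_trials
```

With per-resident thresholds instead of a single requirement, the same loop models mixed thoracic/cardiac pathways. Note that a uniform 25-case requirement is infeasible outright here, since 4 × 25 exceeds the 86 available cases.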
Nonlinear lymphangion pressure-volume relationship minimizes edema
Venugopal, Arun M.; Stewart, Randolph H.; Laine, Glen A.
2010-01-01
Lymphangions, the segments of lymphatic vessel between two valves, contract cyclically and actively pump, analogous to cardiac ventricles. Besides having a discernable systole and diastole, lymphangions have a relatively linear end-systolic pressure-volume relationship (with slope Emax) and a nonlinear end-diastolic pressure-volume relationship (with slope Emin). To counter increased microvascular filtration (causing increased lymphatic inlet pressure), lymphangions must respond to modest increases in transmural pressure by increasing pumping. To counter venous hypertension (causing increased lymphatic inlet and outlet pressures), lymphangions must respond to potentially large increases in transmural pressure by maintaining lymph flow. We therefore hypothesized that the nonlinear lymphangion pressure-volume relationship allows transition from a transmural pressure-dependent stroke volume to a transmural pressure-independent stroke volume as transmural pressure increases. To test this hypothesis, we applied a mathematical model based on the time-varying elastance concept typically applied to ventricles (the ratio of pressure to volume cycles periodically from a minimum, Emin, to a maximum, Emax). This model predicted that lymphangions increase stroke volume and stroke work with transmural pressure if Emin < Emax at low transmural pressures, but maintain stroke volume and stroke work if Emin = Emax at higher transmural pressures. Furthermore, at higher transmural pressures, stroke work is evenly distributed among a chain of lymphangions. Model predictions were tested by comparison to previously reported data. Model predictions were consistent with reported lymphangion properties and pressure-flow relationships of entire lymphatic systems. The nonlinear lymphangion pressure-volume relationship therefore minimizes edema resulting from both increased microvascular filtration and venous hypertension. PMID:20601461
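The time-varying elastance idea can be reduced to a few lines: filling is limited by the diastolic slope Emin against inlet pressure, ejection by the systolic slope Emax against outlet pressure. The sketch below linearizes both relationships through a common unstressed volume and uses illustrative pressures and elastances (with a small favorable axial gradient for simplicity); it is a caricature of the model, not the published parameterization.

```python
def stroke_volume(p_in, p_out, e_min, e_max, v0=0.0):
    """Stroke volume under a time-varying elastance caricature:
    end-diastolic volume set by E_min against inlet pressure,
    end-systolic volume set by E_max against outlet pressure."""
    v_ed = v0 + p_in / e_min    # filling limited by diastolic stiffness
    v_es = v0 + p_out / e_max   # ejection limited by systolic stiffness
    return v_ed - v_es

# Low transmural pressure: compliant diastole (E_min << E_max), so raising
# both pressures by the same offset increases stroke volume.
low1 = stroke_volume(4.0, 3.0, e_min=0.5, e_max=5.0)
low2 = stroke_volume(5.0, 4.0, e_min=0.5, e_max=5.0)

# High transmural pressure: E_min approaches E_max on the stiff part of the
# nonlinear EDPVR, and the same offset leaves stroke volume unchanged.
high1 = stroke_volume(14.0, 13.0, e_min=5.0, e_max=5.0)
high2 = stroke_volume(15.0, 14.0, e_min=5.0, e_max=5.0)
print(low2 - low1, high2 - high1)
```

With Emin = Emax = E the sketch gives SV = (p_in - p_out)/E, independent of the transmural pressure offset, which is the pressure-independent regime hypothesized in the abstract.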
Theoretical Evaluation of Electroactive Polymer Based Micropump Diaphragm for Air Flow Control
NASA Technical Reports Server (NTRS)
Xu, Tian-Bing; Su, Ji; Zhang, Qiming
2004-01-01
An electroactive polymer (EAP)-based actuation micropump diaphragm (PAMPD), made of high-energy-electron-irradiated poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] copolymer, has been developed for air flow control. The displacement strokes and profiles have been characterized as a function of the amplitude and frequency of the applied electric field. The volume stroke rates (volume rate) have also been evaluated theoretically as a function of electric field and driving frequency. The PAMPD exhibits a high volume rate, which is easily tuned by varying either the amplitude or the frequency of the applied electric field. In addition, the performance of the diaphragms was modeled, and the agreement between the modeling results and experimental data confirms that the response of the diaphragms follows the design parameters. The results demonstrate that the diaphragm can serve future aerospace applications by replacing traditional complex mechanical systems, increasing control capability, and reducing the weight of future aerodynamic control systems. KEYWORDS: Electroactive polymer (EAP), micropump, diaphragm, actuation, displacement, volume rate, pumping speed, clamping ratio.
Front tracking based modeling of the solid grain growth on the adaptive control volume grid
NASA Astrophysics Data System (ADS)
Seredyński, Mirosław; Łapka, Piotr
2017-07-01
The paper presents a micro-scale model of the unconstrained solidification of a grain immersed in under-cooled liquid, based on the front tracking approach. For this length scale, the interface tracked through the domain represents the solid-liquid boundary. To prevent the generation of huge meshes, the energy transport equation is discretized on the adaptive control volume (c.v.) mesh. The coupling of the dynamically changing mesh and the moving front position is addressed. Preliminary results of a simulation of a test case, the growth of a single grain, are presented and discussed.
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes accurate decisions on procurement volume difficult. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective is to establish a novel medical information management and decision model for optimization under uncertain demand. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The optimal management and decision model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence effectively reduces the computational complexity of the optimal management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
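The idea of choosing a procurement volume against a fuzzy demand can be sketched as follows. The sketch assumes a triangular fuzzy demand, a membership-weighted expected cost, and a plain 1-D particle swarm optimizer (not the paper's improved variant); the cost coefficients and demand support are illustrative assumptions.

```python
import random

def inventory_cost(order, demand, hold=1.0, shortage=4.0):
    """Per-period cost: holding for leftover stock, penalty for unmet demand."""
    return hold * max(order - demand, 0.0) + shortage * max(demand - order, 0.0)

def expected_cost(order, a, m, b, n=101):
    """Cost averaged over the support of a triangular fuzzy demand (a, m, b),
    weighting each demand level by its membership grade."""
    total = wsum = 0.0
    for k in range(n):
        d = a + (b - a) * k / (n - 1)
        w = (d - a) / (m - a) if d <= m else (b - d) / (b - m)
        total += w * inventory_cost(order, d)
        wsum += w
    return total / wsum

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=5):
    """Minimal 1-D particle swarm optimizer (plain PSO for illustration)."""
    random.seed(seed)
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pbest_f = xs[:], [f(x) for x in xs]
    gbest = min(pbest, key=f)
    gbest_f = f(gbest)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # keep particles in bounds
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f

# Hypothetical seasonal demand: "about 1000 units, between 800 and 1300".
best_order, best_cost = pso_minimize(
    lambda q: expected_cost(q, 800.0, 1000.0, 1300.0), 800.0, 1300.0)
print(round(best_order), round(best_cost, 1))
```

Because the shortage penalty exceeds the holding cost, the optimizer settles on an order volume above the most plausible demand, which is the qualitative behavior such a model is meant to capture.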
A 4DCT imaging-based breathing lung model with relative hysteresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.
A representation of an NTCP function for local complication mechanisms
NASA Astrophysics Data System (ADS)
Alber, M.; Nüsslin, F.
2001-02-01
A mathematical formalism was tailored for the description of mechanisms complicating radiation therapy with a predominantly local component. The functional representation of an NTCP function was developed based on the notion that it has to be robust against population averages in order to be applicable to experimental data. The model was required to be invariant under scaling operations of the dose and the irradiated volume. The NTCP function was derived from the model assumptions that the complication is a consequence of local tissue damage and that the probability of local damage in a small reference volume is independent of the neighbouring volumes. The performance of the model was demonstrated with an animal model which has been published previously (Powers et al 1998 Radiother. Oncol. 46 297-306).
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
Infrasound Waveform Inversion and Mass Flux Validation from Sakurajima Volcano, Japan
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Yokoo, A.; Izbekov, P. E.; Lopez, T. M.; Prata, F.; Ahonen, P.; Kazahaya, R.; Nakamichi, H.; Iguchi, M.
2015-12-01
Recent advances in numerical wave propagation modeling and station coverage have permitted robust inversion of infrasound data from volcanic explosions. Complex topography and crater morphology have been shown to substantially affect the infrasound waveform, suggesting that homogeneous acoustic propagation assumptions are invalid. Infrasound waveform inversion provides an exciting tool to accurately characterize emission volume and mass flux from both volcanic and non-volcanic explosions. Mass flux, arguably the most sought-after parameter from a volcanic eruption, can be determined from the volume flux using infrasound waveform inversion if the volcanic flow is well-characterized. Thus far, infrasound-based volume and mass flux estimates have yet to be validated. In February 2015 we deployed six infrasound stations around the explosive Sakurajima Volcano, Japan for 8 days. Here we present our full waveform inversion method and volume and mass flux estimates of numerous high amplitude explosions using a high resolution DEM and 3-D Finite Difference Time Domain modeling. Application of this technique to volcanic eruptions may produce realistic estimates of mass flux and plume height necessary for volcanic hazard mitigation. Several ground-based instruments and methods are used to independently determine the volume, composition, and mass flux of individual volcanic explosions. Specifically, we use ground-based ash sampling, multispectral infrared imagery, UV spectrometry, and multigas data to estimate the plume composition and flux. Unique tiltmeter data from underground tunnels at Sakurajima also provides a way to estimate the volume and mass of each explosion. In this presentation we compare the volume and mass flux estimates derived from the different methods and discuss sources of error and future improvements.
Developing a stochastic traffic volume prediction model for public-private partnership projects
NASA Astrophysics Data System (ADS)
Phong, Nguyen Thanh; Likhitruangsilp, Veerasak; Onishi, Masamitsu
2017-11-01
Transportation projects require an enormous amount of capital investment resulting from their tremendous size, complexity, and risk. Due to the limitation of public finances, the private sector is invited to participate in transportation project development. The private sector can entirely or partially invest in transportation projects in the form of the Public-Private Partnership (PPP) scheme, which has been an attractive option for several developing countries, including Vietnam. There are many factors affecting the success of PPP projects. The accurate prediction of traffic volume is considered one of the key success factors of PPP transportation projects. However, only a few research works have investigated how to predict traffic volume over a long period of time. Moreover, conventional traffic volume forecasting methods are usually based on deterministic models, which predict a single value of traffic volume but do not consider risk and uncertainty. This knowledge gap makes it difficult for concessionaires to estimate PPP transportation project revenues accurately. The objective of this paper is to develop a probabilistic traffic volume prediction model. First, traffic volumes were estimated following the Geometric Brownian Motion (GBM) process. The Monte Carlo technique is then applied to simulate different scenarios. The results show that this stochastic approach can systematically analyze variations in the traffic volume and yield more reliable estimates for PPP projects.
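A GBM traffic forecast with Monte Carlo scenarios can be sketched in a few lines. The drift, volatility, and baseline volume below are illustrative assumptions, not values from the paper; the exact-discretization update V(t+1) = V(t)·exp((μ − σ²/2) + σZ) is the standard GBM recipe with a one-year step.

```python
import math
import random

def simulate_gbm_traffic(v0, mu, sigma, years, n_paths=10_000, seed=7):
    """Monte Carlo simulation of annual traffic volume following geometric
    Brownian motion, dV = mu*V dt + sigma*V dW, with dt = 1 year.
    Returns the mean and the 5th/95th percentiles of the final-year volume."""
    random.seed(seed)
    finals = []
    for _ in range(n_paths):
        v = v0
        for _ in range(years):
            z = random.gauss(0.0, 1.0)
            v *= math.exp((mu - 0.5 * sigma ** 2) + sigma * z)
        finals.append(v)
    finals.sort()
    mean = sum(finals) / n_paths
    p5, p95 = finals[int(0.05 * n_paths)], finals[int(0.95 * n_paths)]
    return mean, p5, p95

# Hypothetical concession: 20,000 vehicles/day, 3% drift, 10% volatility, 20 years.
mean, p5, p95 = simulate_gbm_traffic(20_000, 0.03, 0.10, years=20)
print(round(mean), round(p5), round(p95))
```

Instead of the single number a deterministic forecast would give, the simulation yields a distribution, so a concessionaire can quote a downside (5th percentile) revenue scenario alongside the expectation.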
Satellite-based empirical models linking river plume dynamics with hypoxic area and volume
Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...
The use of biomarkers to describe plasma-, red cell-, and blood volume from a simple blood test.
Lobigs, Louisa Margit; Sottas, Pierre-Edouard; Bourdon, Pitre Collier; Nikolovski, Zoran; El-Gingo, Mohamed; Varamenti, Evdokia; Peeling, Peter; Dawson, Brian; Schumacher, Yorck Olaf
2017-01-01
Plasma volume and red cell mass are key health markers used to monitor numerous disease states, such as heart failure, kidney disease, or sepsis. Nevertheless, there is currently no practically applicable method to easily measure absolute plasma or red cell volumes in a clinical setting. Here, a novel marker for plasma volume and red cell mass was developed through analysis of the observed variability caused by plasma volume shifts in common biochemical measures, selected based on their propensity to present with low variations over time. Once a month for 6 months, serum and whole blood samples were collected from 33 active males. Concurrently, the CO-rebreathing method was applied to determine target levels of hemoglobin mass (HbM) and blood volumes. The variability of 18 common chemistry markers and 27 Full Blood Count variables was investigated and matched to the observed plasma volume variation. After the removal of between-subject variations using a Bayesian model, multivariate analysis identified two sets of 8 and 15 biomarkers explaining 68% and 69% of plasma volume variance, respectively. The final multiparametric model contains a weighting function to allow for isolated abnormalities in single biomarkers. This proof-of-concept investigation describes a novel approach to estimate absolute vascular volumes with a simple blood test. Despite the physiological instability of critically ill patients, it is hypothesized that the model, with its multiparametric approach and weighting function, maintains the capacity to describe vascular volumes. This model has potential to transform volume management in clinical settings. Am. J. Hematol. 92:62-67, 2017. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiveland, W.A.; Oberjohn, W.J.; Cornelius, D.K.
1985-12-01
This report summarizes the work conducted during a 30-month contract with the United States Department of Energy (DOE) Pittsburgh Energy Technology Center (PETC). The general objective is to develop and verify a computer code capable of modeling the major aspects of pulverized coal combustion. Achieving this objective will lead to design methods applicable to industrial and utility furnaces. The combustion model (COMO) is based mainly on an existing Babcock and Wilcox (B and W) computer program. The model consists of a number of relatively independent modules that represent the major processes involved in pulverized coal combustion: flow, heterogeneous and homogeneous chemical reaction, and heat transfer. As models are improved or as new ones are developed, this modular structure allows portions of the COMO model to be updated with minimal impact on the remainder of the program. The report consists of two volumes. This volume (Volume 1) contains a technical summary of the COMO model, results of predictions for gas phase combustion, pulverized coal combustion, and a detailed description of the COMO model. Volume 2 is the Users Guide for COMO and contains detailed instructions for preparing the input data and a description of the program output. Several example cases have been included to aid the user in usage of the computer program for pulverized coal applications. 66 refs., 41 figs., 21 tabs.
Colonic transit time and pressure based on Bernoulli’s principle
Uno, Yoshiharu
2018-01-01
Purpose Variations in the caliber of the human large intestinal tract cause changes in the pressure and velocity of its contents, depending on flow volume, gravity, and density, which are all variables of Bernoulli's principle. Therefore, it was hypothesized that constipation and diarrhea can occur due to changes in the colonic transit time (CTT), according to Bernoulli's principle. In addition, it was hypothesized that high amplitude peristaltic contractions (HAPC), which are considered to be involved in defecation in healthy subjects, occur because of cecal pressure based on Bernoulli's principle. Methods A virtual healthy model (VHM), a virtual constipation model and a virtual diarrhea model were set up. For each model, the CTT was determined according to the length of each part of the colon, with the velocity then calculated from the cecal inflow volume. In the VHM, the pressure change was calculated, and its consistency with HAPC was verified. Results The CTT changed according to the difference between the cecal inflow volume and the caliber of the intestinal tract, and was inversely proportional to the cecal inflow volume. Compared with the VHM, the CTT was prolonged in the virtual constipation model and shortened in the virtual diarrhea model. The calculated pressure of the VHM and the gradient of the interlocked graph were similar to those of HAPC. Conclusion The CTT and HAPC can be explained by Bernoulli's principle, and constipation and diarrhea may be fundamentally influenced by flow dynamics. PMID:29670388
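The flow-dynamics reasoning above rests on two textbook relations: continuity (velocity = flow volume / cross-sectional area, so transit time = length / velocity) and Bernoulli's equation for the pressure change. A minimal sketch, with an illustrative segment length, caliber, and cecal inflow that are assumptions rather than the paper's virtual-model parameters:

```python
import math

RHO = 1000.0  # density of contents (kg/m^3), water-like assumption
G = 9.81      # gravitational acceleration (m/s^2)

def velocity(inflow_m3s, diameter_m):
    """Continuity: mean velocity = flow volume / cross-sectional area."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return inflow_m3s / area

def transit_time(length_m, inflow_m3s, diameter_m):
    """Segment transit time = length / mean velocity; note it is inversely
    proportional to the inflow volume, as the abstract reports."""
    return length_m / velocity(inflow_m3s, diameter_m)

def pressure_drop(v1, v2, dh=0.0):
    """Bernoulli: p1 - p2 = rho/2 (v2^2 - v1^2) + rho g dh,
    where dh is the downstream minus upstream height."""
    return 0.5 * RHO * (v2 ** 2 - v1 ** 2) + RHO * G * dh

# Hypothetical colon segment: 1.5 m long, 5 cm caliber, 1.5 L/day cecal inflow.
q = 1.5e-3 / 86400.0                 # inflow in m^3/s
t = transit_time(1.5, q, 0.05)
print(round(t / 3600.0, 1))          # transit time in hours
```

Doubling the inflow halves the transit time in this sketch, which is the mechanism the virtual diarrhea model exploits; narrowing the caliber raises the velocity and changes the Bernoulli pressure accordingly.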
Mountris, K A; Bert, J; Noailly, J; Aguilera, A Rodriguez; Valeri, A; Pradier, O; Schick, U; Promayon, E; Ballester, M A Gonzalez; Troccaz, J; Visvikis, D
2017-03-21
Prostate volume changes due to edema occurring during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model's computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day 1 and Day 30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements that account for the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated against previous findings. According to our results, for a prostate volume increase of 10-20% the Day 30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day 1 value. The introduction of the edema dynamics into Day 30 dosimetry reveals a significant global dose overestimation in the conventional static Day 30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.
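The edema resolution patterns such models are checked against are commonly idealized as an exponential decay of the excess volume. The sketch below shows only that phenomenological pattern, not the biphasic finite-element model itself; the reference volume, edema magnitude, and half-life are illustrative assumptions.

```python
def prostate_volume(t_days, v_ref, edema_magnitude, half_life_days):
    """Exponentially resolving edema (a common simplifying assumption):
    V(t) = V_ref * (1 + m * 2^(-t / T)), with half-life T in days."""
    return v_ref * (1.0 + edema_magnitude * 2.0 ** (-t_days / half_life_days))

# Hypothetical patient: 40 mL reference prostate, 15% swelling, 10-day half-life.
v_day1 = prostate_volume(1, 40.0, 0.15, 10.0)
v_day30 = prostate_volume(30, 40.0, 0.15, 10.0)
print(round(v_day1, 2), round(v_day30, 2))  # swelling largely resolved by Day 30
```

Any static Day 30 plan ignores the larger Day 1 volume, which is the source of the dose misestimates the abstract quantifies.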
Rheology of Carbon Fibre Reinforced Cement-Based Mortar
NASA Astrophysics Data System (ADS)
Banfill, Phillip F. G.; Starrs, Gerry; McCarter, W. John
2008-07-01
Carbon fibre reinforced cement based materials (CFRCs) offer the possibility of fabricating "smart" electrically conductive materials. Rheology of the fresh mix is crucial to satisfactory moulding and fresh CFRC conforms to the Bingham model with slight structural breakdown. Both yield stress and plastic viscosity increase with increasing fibre length and volume concentration. Using a modified Viskomat NT, the concentration dependence of CFRC rheology up to 1.5% fibre volume is reported.
Ohashi, Hidenori; Tamaki, Takanori; Yamaguchi, Takeo
2011-12-29
Molecular collisions, which are the microscopic origin of molecular diffusive motion, are affected by both the molecular surface area and the distance between molecules. Their product can be regarded as the free space around a penetrant molecule, defined as the "shell-like free volume", and can be taken as a characteristic of molecular collisions. On the basis of this notion, a new diffusion theory has been developed. The model can predict molecular diffusivity in polymeric systems using only well-defined single-component parameters: molecular volume, molecular surface area, free volume, and pre-exponential factors. In the physical description of the model, the body that actually moves, and the surface with which neighboring molecules collide, are the volume and the surface area of the penetrant molecular core, respectively. In the present study, a semiempirical quantum chemical calculation was used to calculate both of these parameters. The model and the newly developed parameters offer fairly good predictive ability. © 2011 American Chemical Society
NASA Astrophysics Data System (ADS)
2013-01-01
Due to a production error, the article 'Corrigendum: Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard' by Abhinav K Jha, Matthew A Kupinski, Jeffrey J Rodriguez, Renu M Stephen and Alison T Stopeck was duplicated and the article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters was omitted in the print version of Physics in Medicine & Biology, volume 58, issue 1. The online versions of both articles are not affected. The article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters will be included in the print version of this issue (Physics in Medicine & Biology, volume 58, issue 2). We apologise unreservedly for this error. Jon Ruffle, Publisher
Effects of stiffness and volume on the transit time of an erythrocyte through a slit.
Salehyar, Sara; Zhu, Qiang
2017-06-01
By using a fully coupled fluid-cell interaction model, we numerically simulate the dynamic process of a red blood cell passing through a slit driven by an incoming flow. The model is achieved by combining a multiscale model of the composite cell membrane with a boundary element fluid dynamics model based on the Stokes flow assumption. Our concentration is on the correlation between the transit time (the time it takes to finish the whole translocation process) and different conditions (flow speed, cell orientation, cell stiffness, cell volume, etc.) that are involved. According to the numerical prediction (with some exceptions), the transit time rises as the cell is stiffened. It is also highly sensitive to volume increase inside the cell. In general, even slightly swollen cells (i.e., the internal volume is increased while the surface area of the cell kept unchanged) travel dramatically slower through the slit. For these cells, there is also an increased chance of blockage.
NASA Astrophysics Data System (ADS)
Negahdar, Mohammadreza; Beymer, David; Syeda-Mahmood, Tanveer
2018-02-01
Deep learning models such as convolutional neural networks (CNNs) have achieved state-of-the-art performance in 2D medical image analysis. In clinical practice, however, most analyzed and acquired medical data are formed of 3D volumes. In this paper, we present a fast and efficient 3D lung segmentation method based on V-net, a purely volumetric fully convolutional network. Our model is trained on chest CT images through volume-to-volume learning, which alleviates the overfitting problem on a limited number of annotated training data. Adopting a pre-processing step and training an objective function based on the Dice coefficient address the imbalance between the number of lung voxels and that of the background. We have leveraged the V-net model by using batch normalization for training, which enables a higher learning rate and accelerates the training of the model. To address the inadequacy of training data and obtain better robustness, we augment the data by applying random linear and non-linear transformations. Experimental results on two challenging medical image datasets show that our proposed method achieved competitive results with a much faster speed.
Liu, Xingguo; Niu, Jianwei; Ran, Linghua; Liu, Taijie
2017-08-01
This study aimed to develop estimation formulae for the total human body volume (BV) of adult males using anthropometric measurements based on a three-dimensional (3D) scanning technique. Noninvasive and reliable methods to predict the total BV from anthropometric measurements based on a 3D scan technique were addressed in detail. A regression analysis of BV based on four key measurements was conducted for approximately 160 adult male subjects. Eight total models of human BV show that the predicted results fitted by the regression models were highly correlated with the actual BV (p < 0.001). Two metrics, the mean value of the absolute difference between the actual and predicted BV (V_error) and the mean value of the ratio between V_error and the actual BV (RV_error), were calculated. The linear model based on human weight was recommended as the most optimal due to its simplicity and high efficiency. The proposed estimation formulae are valuable for estimating total body volume in circumstances in which traditional underwater weighing or air displacement plethysmography is not applicable or accessible. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
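A weight-based linear model and the two reported error metrics can be sketched with ordinary least squares. The sample weights and volumes below are fabricated for illustration only; the paper's fitted coefficients are not reproduced here.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def v_error(actual, predicted):
    """Mean absolute difference between actual and predicted BV."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rv_error(actual, predicted):
    """Mean ratio of the absolute error to the actual BV."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical sample: weight (kg) vs. scanned body volume (L).
weight = [55, 62, 70, 78, 85, 93, 101]
volume = [52.1, 58.8, 66.9, 74.2, 81.0, 88.7, 96.5]
a, b = fit_linear(weight, volume)
pred = [a + b * w for w in weight]
print(round(b, 3), round(v_error(volume, pred), 3), round(rv_error(volume, pred), 4))
```

Since soft tissue has a density close to 1 kg/L, a slope a little below 1 L/kg is the physically expected shape of such a weight-only model, which is why a single-predictor linear fit can already perform well.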
Emergent neutrality drives phytoplankton species coexistence
Segura, Angel M.; Calliari, Danilo; Kruk, Carla; Conde, Daniel; Bonilla, Sylvia; Fort, Hugo
2011-01-01
The mechanisms that drive species coexistence and community dynamics have long puzzled ecologists. Here, we explain species coexistence, size structure and diversity patterns in a phytoplankton community using a combination of four fundamental factors: organism traits, size-based constraints, hydrology and species competition. Using a ‘microscopic’ Lotka–Volterra competition (MLVC) model (i.e. with explicit recipes to compute its parameters), we provide a mechanistic explanation of species coexistence along a niche axis (i.e. organismic volume). We based our model on empirically measured quantities, minimal ecological assumptions and stochastic processes. In nature, we found aggregated patterns of species biovolume (i.e. clumps) along the volume axis and a peak in species richness. Both patterns were reproduced by the MLVC model. Observed clumps corresponded to niche zones (volumes) where species fitness was highest, or where fitness was equal among competing species. The latter implies the action of equalizing processes, which would suggest emergent neutrality as a plausible mechanism to explain community patterns. PMID:21177680
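The flavor of such a model can be conveyed with a toy Lotka-Volterra competition on a log10(volume) niche axis, where the competition coefficient between two species decays with their distance in log-volume via a Gaussian overlap kernel. This sketch is not the MLVC parameterization (no trait-based recipes, hydrology, or stochastic terms); the axis range, kernel width, and initial biovolumes are illustrative assumptions.

```python
import math
import random

def lv_niche_sketch(n=24, sigma=0.8, r=1.0, dt=0.01, steps=1500, seed=3):
    """Discrete Lotka-Volterra competition along a log10(volume) niche axis.
    Species at similar volumes overlap strongly and compete most intensely."""
    random.seed(seed)
    x = [6.0 * i / (n - 1) for i in range(n)]  # log10 volume, spanning 1 to 10^6
    alpha = [[math.exp(-((xi - xj) ** 2) / (2 * sigma ** 2)) for xj in x] for xi in x]
    b = [random.uniform(0.5, 1.0) for _ in range(n)]  # initial biovolumes
    for _ in range(steps):
        comp = [sum(alpha[i][j] * b[j] for j in range(n)) for i in range(n)]
        b = [max(bi * (1.0 + dt * r * (1.0 - comp[i])), 0.0)
             for i, bi in enumerate(b)]
    return x, b

x, b = lv_niche_sketch()
survivors = [round(xi, 1) for xi, bi in zip(x, b) if bi > 0.05]
print(len(survivors), survivors)
```

Because overlapping species suppress each other while distant ones barely interact, biovolume tends to organize along the axis rather than spread evenly, which is the aggregated-clump pattern the abstract describes in field data.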
Scan-based volume animation driven by locally adaptive articulated registrations.
Rhee, Taehyun; Lewis, J P; Neumann, Ulrich; Nayak, Krishna S
2011-03-01
This paper describes a complete system to create anatomically accurate example-based volume deformation and animation of articulated body regions, starting from multiple in vivo volume scans of a specific individual. In order to solve the correspondence problem across volume scans, a template volume is registered to each sample. The wide range of pose variations is first approximated by volume blend deformation (VBD), providing proper initialization of the articulated subject in different poses. A novel registration method is presented to efficiently reduce the computation cost while avoiding strong local minima inherent in complex articulated body volume registration. The algorithm highly constrains the degrees of freedom and search space involved in the nonlinear optimization, using hierarchical volume structures and locally constrained deformation based on the biharmonic clamped spline. Our registration step establishes a correspondence across scans, allowing a data-driven deformation approach in the volume domain. The results provide an occlusion-free person-specific 3D human body model, asymptotically accurate inner tissue deformations, and realistic volume animation of articulated movements driven by standard joint control estimated from the actual skeleton. Our approach also addresses the practical issues arising in using scans from living subjects. The robustness of our algorithms is tested by their applications on the hand, probably the most complex articulated region in the body, and the knee, a frequent subject area for medical imaging due to injuries. © 2011 IEEE
Boswell, C. Andrew; Ferl, Gregory Z.; Mundo, Eduardo E.; Bumbaca, Daniela; Schweiger, Michelle G.; Theil, Frank-Peter; Fielder, Paul J.; Khawli, Leslie A.
2011-01-01
Background The identification of clinically meaningful and predictive models of disposition kinetics for cancer therapeutics is an ongoing pursuit in drug development. In particular, the growing interest in preclinical evaluation of anti-angiogenic agents alone or in combination with other drugs requires a complete understanding of the associated physiological consequences. Methodology/Principal Findings Technescan™ PYP™, a clinically utilized radiopharmaceutical, was used to measure tissue vascular volumes in beige nude mice that were naïve or administered a single intravenous bolus dose of a murine anti-vascular endothelial growth factor (anti-VEGF) antibody (10 mg/kg) 24 h prior to assay. Anti-VEGF had no significant effect (p>0.05) on the fractional vascular volumes of any tissues studied; these findings were further supported by single photon emission computed tomographic imaging. In addition, apart from a borderline significant increase (p = 0.048) in mean hepatic blood flow, no significant anti-VEGF-induced differences were observed (p>0.05) in two additional physiological parameters, interstitial fluid volume and the organ blood flow rate, measured using indium-111-pentetate and rubidium-86 chloride, respectively. Areas under the concentration-time curves generated by a physiologically-based pharmacokinetic model changed substantially (>25%) in several tissues when model parameters describing compartmental volumes and blood flow rates were switched from literature to our experimentally derived values. However, negligible changes in predicted tissue exposure were observed when comparing simulations based on parameters measured in naïve versus anti-VEGF-administered mice. Conclusions/Significance These observations may foster an enhanced understanding of anti-VEGF effects in murine tissues and, in particular, may be useful in modeling antibody uptake alone or in combination with anti-VEGF. PMID:21436893
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donnelly, H.; Fullwood, R.; Glancy, J.
This is the second volume of a two-volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapter 3.2.1 and Chapter 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3.
Using Model-Based Reasoning for Autonomous Instrument Operation - Lessons Learned From IMAGE/LENA
NASA Technical Reports Server (NTRS)
Johnson, Michael A.; Rilee, Michael L.; Truszkowski, Walt; Bailin, Sidney C.
2001-01-01
Model-based reasoning has been applied as an autonomous control strategy on the Low Energy Neutral Atom (LENA) instrument currently flying on board the Imager for Magnetosphere-to-Aurora Global Exploration (IMAGE) spacecraft. Explicit models of instrument subsystem responses have been constructed and are used to dynamically adapt the instrument to the spacecraft's environment. These functions are cast as part of a Virtual Principal Investigator (VPI) that autonomously monitors and controls the instrument. In the VPI's current implementation, LENA's command uplink volume has been decreased significantly from its previous volume; typically, no uplinks are required for operations. This work demonstrates that a model-based approach can be used to enhance science instrument effectiveness. The components of LENA are common in space science instrumentation, and lessons learned by modeling this system may be applied to other instruments. Future work involves the extension of these methods to cover more aspects of LENA operation and the generalization to other space science instrumentation.
Barba-J, Leiner; Escalante-Ramírez, Boris; Vallejo Venegas, Enrique; Arámbula Cosío, Fernando
2018-05-01
Analysis of cardiac images is a fundamental task to diagnose heart problems. Left ventricle (LV) is one of the most important heart structures used for cardiac evaluation. In this work, we propose a novel 3D hierarchical multiscale segmentation method based on a local active contour (AC) model and the Hermite transform (HT) for LV analysis in cardiac magnetic resonance (MR) and computed tomography (CT) volumes in short axis view. Features such as directional edges, texture, and intensities are analyzed using the multiscale HT space. A local AC model is configured using the HT coefficients and geometrical constraints. The endocardial and epicardial boundaries are used for evaluation. Segmentation of the endocardium is controlled using elliptical shape constraints. The final endocardial shape is used to define the geometrical constraints for segmentation of the epicardium. We follow the assumption that epicardial and endocardial shapes are similar in volumes with short axis view. An initialization scheme based on a fuzzy C-means algorithm and mathematical morphology was designed. The algorithm performance was evaluated using cardiac MR and CT volumes in short axis view demonstrating the feasibility of the proposed method.
Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.
2001-01-16
Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks there through. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. 
The foregoing represents an advance in computational times by multiple factors of time magnitudes.
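The integer-increment traversal described in this patent abstract can be sketched as follows; the function name, toy two-material voxel model, and grid dimensions are invented for illustration and are not from the patent:

```python
import numpy as np

def find_material_boundary(volume, start, step):
    """Advance through a voxel model in integer increments along a primary
    direction of movement until the anatomical material differs from that of
    the starting element. Returns the position of intersection, or None if
    the particle exits the geometric model."""
    pos = np.array(start, dtype=int)
    start_material = volume[tuple(pos)]
    while True:
        pos = pos + np.array(step, dtype=int)
        if np.any(pos < 0) or np.any(pos >= volume.shape):
            return None                        # particle exited the model
        if volume[tuple(pos)] != start_material:
            return tuple(int(i) for i in pos)  # position of intersection

# Toy geometric model: two materials split along the x axis.
model = np.zeros((10, 10, 10), dtype=int)
model[5:, :, :] = 1
hit = find_material_boundary(model, (0, 4, 4), (1, 0, 0))
```

In the patented scheme this intersection test is what decides whether a simulated neutron is captured, scattered, or has exited the model; the sketch only shows the grid-stepping skeleton.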
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
Phase-field simulations of coherent precipitate morphologies and coarsening kinetics
NASA Astrophysics Data System (ADS)
Vaithyanathan, Venugopalan
2002-09-01
The primary aim of this research is to enhance the fundamental understanding of coherent precipitation reactions in advanced metallic alloys. The emphasis is on a particular class of precipitation reactions which result in ordered intermetallic precipitates embedded in a disordered matrix. These precipitation reactions underlie the development of high-temperature Ni-base superalloys and ultra-light aluminum alloys. The phase-field approach, which has emerged as the method of choice for modeling microstructure evolution, is employed for this research with the focus on factors that control the precipitate morphologies and coarsening kinetics, such as precipitate volume fractions and lattice mismatch between precipitates and matrix. Two types of alloy systems are considered. The first involves L1₂ ordered precipitates in a disordered cubic matrix, in an attempt to model the gamma' precipitates in Ni-base superalloys and delta' precipitates in Al-Li alloys. The effect of volume fraction on coarsening kinetics of gamma' precipitates was investigated using two-dimensional (2D) computer simulations. With increasing volume fraction, larger fractions of precipitates were found to have smaller aspect ratios in the late stages of coarsening, and the precipitate size distributions became wider and more positively skewed. The most interesting result was associated with the effect of volume fraction on the coarsening rate constant. The coarsening rate constant as a function of volume fraction, extracted from the cubic growth law of average half-edge length, was found to exhibit three distinct regimes: anomalous behavior, a decreasing rate constant with volume fraction, at small volume fractions (≲20%); volume-fraction-independent behavior for intermediate volume fractions (~20-50%); and normal behavior, an increasing rate constant with volume fraction, for large volume fractions (≳50%).
The second alloy system considered was Al-Cu, with the focus on understanding precipitation of metastable tetragonal θ′-Al₂Cu in a cubic Al solid solution matrix. In collaboration with Chris Wolverton at Ford Motor Company, a multiscale model, which involves a novel combination of first-principles atomistic calculations with a mesoscale phase-field microstructure model, was developed. Reliable energetics in the form of bulk free energy, interfacial energy and parameters for calculating the elastic energy were obtained using accurate first-principles calculations. (Abstract shortened by UMI.)
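The "cubic growth law" from which the coarsening rate constant is extracted is the standard LSW-type relation; stated schematically (the symbols here are assumed, not taken from the dissertation):

```latex
\bar{r}^{\,3}(t) - \bar{r}^{\,3}(t_0) = K(\phi)\,(t - t_0)
```

where \bar{r} is the average precipitate half-edge length, \phi the precipitate volume fraction, and K(\phi) the volume-fraction-dependent coarsening rate constant whose three regimes the abstract describes.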
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. 
This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
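The benchmark approach the authors endorse, multiple linear regression on calendar variables, can be sketched on synthetic data; the day-of-week means and noise level below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 364
dow = np.arange(n_days) % 7                             # day-of-week index
true_mean = np.array([110, 100, 98, 97, 99, 120, 130])  # hypothetical means
volumes = true_mean[dow] + rng.normal(0, 5, n_days)     # synthetic arrivals

# Design matrix: intercept plus six day-of-week dummies (day 0 as baseline).
X = np.column_stack([np.ones(n_days)] +
                    [(dow == d).astype(float) for d in range(1, 7)])
beta, *_ = np.linalg.lstsq(X, volumes, rcond=None)

def forecast(day_of_week):
    """Predicted daily ED patient volume for a given day-of-week."""
    x = np.zeros(7)
    x[0] = 1.0
    if day_of_week > 0:
        x[day_of_week] = 1.0
    return float(x @ beta)
```

A production model would, as the authors recommend, also include month and holiday indicators, site-specific special-day effects, and a term for residual autocorrelation.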
26th JANNAF Airbreathing Propulsion Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Fry, Ronald S. (Editor); Gannaway, Mary T. (Editor)
2002-01-01
This volume, the first of four volumes, is a collection of 28 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 26th Airbreathing Propulsion Subcommittee (APS) meeting, which was held jointly with the 38th Combustion Subcommittee (CS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 2nd Modeling and Simulation Subcommittee. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics covered include: scramjet and ramjet R&D program overviews; tactical propulsion; space access; NASA GTX status; PDE technology; actively cooled engine structures; modeling and simulation of complex hydrocarbon fuels and unsteady processes; and component modeling and simulation.
Volume-based characterization of postocclusion surge.
Zacharias, Jaime; Zacharias, Sergio
2005-10-01
To propose an alternative method to characterize postocclusion surge using a collapsible artificial anterior chamber to replace the currently used rigid anterior chamber model. Fundación Oftalmológica Los Andes, Santiago, Chile. The distal end of a phacoemulsification handpiece was placed inside a compliant artificial anterior chamber. Digital recordings of chamber pressure, chamber volume, inflow, and outflow were performed during occlusion break of the phacoemulsification tip. The occlusion break profile of 2 different consoles was compared. Occlusion break while using a rigid anterior chamber model produced a simultaneous increase of chamber inflow and outflow. In the rigid chamber model, pressure decreased sharply, reaching negative pressure values. Alternatively, with the collapsible chamber model, a delay was observed in the inflow that occurs to compensate for the outflow surge. Also, the chamber pressure drop was smaller in magnitude, never undershooting below atmospheric pressure into negative values. Using 500 mm Hg as the vacuum limit, the Infiniti System (Alcon) performed better than the Legacy (Alcon), showing an 18% reduction in peak volume variation. The collapsible anterior chamber model provides a more realistic representation of the postocclusion surge events that occur in the real eye during cataract surgery. Peak volume fluctuation (mL), half volume recovery time (s), and volume fluctuation integral value (mL·s) are proposed as realistic indicators to characterize postocclusion surge performance. These indicators show that the Infiniti System has a better postocclusion surge behavior than the Legacy System.
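The three proposed indicators can be computed from a recorded chamber-volume trace; the exponential recovery curve below is synthetic and purely illustrative, not data from the study:

```python
import numpy as np

def surge_indicators(t, volume, baseline):
    """Compute the three proposed surge indicators from a volume trace:
    peak volume fluctuation (mL), half volume recovery time (s), and the
    integral of the volume deviation over time (mL*s)."""
    dev = baseline - volume                  # volume lost below baseline
    peak = float(dev.max())
    i_peak = int(dev.argmax())
    # first sample after the peak at which the deviation halves
    after = np.where(dev[i_peak:] <= peak / 2)[0]
    half_time = float(t[i_peak + after[0]] - t[i_peak]) if after.size else None
    # trapezoidal integral of the deviation
    integral = float(np.sum((dev[1:] + dev[:-1]) / 2 * np.diff(t)))
    return peak, half_time, integral

# Synthetic surge: exponential recovery after an occlusion break.
t = np.linspace(0, 2, 201)
vol = 1.0 - 0.2 * np.exp(-t / 0.3)
peak, half_t, area = surge_indicators(t, vol, baseline=1.0)
```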
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
Correlating Free-Volume Hole Distribution to the Glass Transition Temperature of Epoxy Polymers.
Aramoon, Amin; Breitzman, Timothy D; Woodward, Christopher; El-Awady, Jaafar A
2017-09-07
A new algorithm is developed to quantify the free-volume hole distribution and its evolution in coarse-grained molecular dynamics simulations of polymeric networks. This is achieved by analyzing the geometry of the network rather than a voxelized image of the structure to accurately and efficiently find and quantify free-volume hole distributions within large-scale simulations of polymer networks. The free-volume holes are quantified by fitting the largest ellipsoids and spheres in the free volumes between polymer chains. The free-volume hole distributions calculated from this algorithm are shown to be in excellent agreement with those measured from positron annihilation lifetime spectroscopy (PALS) experiments at different temperatures and pressures. Based on the results predicted using this algorithm, an evolution model is proposed for the thermal behavior of an individual free-volume hole. This model is calibrated such that the average radius of free-volume holes mimics the one predicted from the simulations. The model is then employed to predict the glass-transition temperature of epoxy polymers with different degrees of cross-linking and lengths of prepolymers. Comparison between the predicted glass-transition temperatures and those measured from simulations or experiments implies that this model is capable of successfully predicting the glass-transition temperature of the material using only a PDF of the initial free-volume hole radii of each microstructure. This provides an effective approach for the optimized design of polymeric systems on the basis of the glass-transition temperature, degree of cross-linking, and average length of prepolymers.
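A heavily simplified sketch of the sphere-fitting idea: the radius of the largest sphere centred at a candidate point is its distance to the nearest bead surface. The actual algorithm analyzes the network geometry and also fits ellipsoids; the function and toy bead network here are invented for illustration:

```python
import numpy as np

def hole_radius(point, bead_positions, bead_radius):
    """Radius of the largest sphere centred at `point` that fits in the
    free volume between beads (distance to nearest bead surface)."""
    d = np.linalg.norm(bead_positions - point, axis=1)
    return max(float(d.min()) - bead_radius, 0.0)

# Toy network: eight beads at the corners of a unit cube.
beads = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=float)
r_centre = hole_radius(np.array([0.5, 0.5, 0.5]), beads, bead_radius=0.2)
```

A distribution of such radii over many candidate points is what would be compared against a PALS-measured hole-size distribution.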
Turunen, Siru M.; Han, Sang Kuy; Herzog, Walter; Korhonen, Rami K.
2013-01-01
The aim of this study was to investigate if the experimentally detected altered chondrocyte volumetric behavior in early osteoarthritis can be explained by changes in the extracellular and pericellular matrix properties of cartilage. Based on our own experimental tests and the literature, the structural and mechanical parameters for normal and osteoarthritic cartilage were implemented into a multiscale fibril-reinforced poroelastic swelling model. Model simulations were compared with experimentally observed cell volume changes in mechanically loaded cartilage, obtained from anterior cruciate ligament transected rabbit knees. We found that the cell volume increased by 7% in the osteoarthritic cartilage model following mechanical loading of the tissue. In contrast, the cell volume decreased by 4% in normal cartilage model. These findings were consistent with the experimental results. Increased local transversal tissue strain due to the reduced collagen fibril stiffness accompanied with the reduced fixed charge density of the pericellular matrix could increase the cell volume up to 12%. These findings suggest that the increase in the cell volume in mechanically loaded osteoarthritic cartilage is primarily explained by the reduction in the pericellular fixed charge density, while the superficial collagen fibril stiffness is suggested to contribute secondarily to the cell volume behavior. PMID:23634175
Obi, Andrea; Chung, Jennifer; Chen, Ryan; Lin, Wandi; Sun, Siyuan; Pozehl, William; Cohn, Amy M; Daskin, Mark S; Seagull, F Jacob; Reddy, Rishindra M
2015-11-01
Certain operative cases occur unpredictably and/or have long operative times, creating a conflict between Accreditation Council for Graduate Medical Education (ACGME) rules and adequate training experience. A ProModel-based simulation was developed based on historical data. Probabilistic distributions of operative time were calculated and combined with an ACGME-compliant call schedule. For the advanced surgical cases modeled (cardiothoracic transplants), 80-hour violations occurred in 6.07% and the minimum number of days off was violated in 22.50%. There was a 36% chance of failure to fulfill any (either heart or lung) minimum case requirement despite adequate volume. The variable nature of emergency cases inevitably leads to work hour violations under ACGME regulations. Unpredictable cases mandate higher operative volume to ensure achievement of adequate caseloads. Publicly available simulation technology provides a valuable avenue to identify adequacy of case volumes for trainees in both the elective and emergency settings. Copyright © 2015 Elsevier Inc. All rights reserved.
Bråtane, Bernt Tore; Bastan, Birgul; Fisher, Marc; Bouley, James; Henninger, Nils
2009-07-07
Though diffusion-weighted imaging (DWI) is frequently used for identifying the ischemic lesion in focal cerebral ischemia, the understanding of spatiotemporal evolution patterns observed with different analysis methods remains imprecise. DWI and calculated apparent diffusion coefficient (ADC) maps were serially obtained in rat stroke models (MCAO): permanent, 90 min, and 180 min temporary MCAO. Lesion volumes were analyzed in a blinded and randomized manner by 2 investigators using (i) a previously validated ADC threshold, (ii) visual determination of hypointense regions on ADC maps, and (iii) visual determination of hyperintense regions on DWI. Lesion volumes were correlated with 24-hour 2,3,5-triphenyltetrazolium chloride (TTC)-derived infarct volumes. TTC-derived infarct volumes were not significantly different from the ADC- and DWI-derived lesion volumes at the last imaging time points, except for significantly smaller DWI lesions in the pMCAO model (p=0.02). TTC-derived infarct volumes also correlated significantly more strongly with lesion volumes derived from ADC maps at the last imaging time point than with those derived from DWI (p<0.05). Following reperfusion, lesion volumes on the ADC maps were significantly reduced, but no change was observed on DWI. Visually determined lesion volumes on ADC maps and DWI by both investigators correlated significantly with threshold-derived lesion volumes on ADC maps, with the former method demonstrating a stronger correlation. There was also better interrater agreement for ADC map analysis than for DWI analysis. Ischemic lesion determination by ADC was more accurate in final infarct prediction, rater independent, and provided exclusive information on ischemic lesion reversibility.
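A threshold-based lesion volume of the kind used in method (i) reduces to counting voxels below an absolute ADC cutoff; the threshold value, voxel size, and toy ADC map below are invented for illustration:

```python
import numpy as np

def lesion_volume(adc_map, threshold, voxel_volume_mm3):
    """Lesion volume (mm^3): number of voxels whose ADC falls below an
    absolute threshold, times the volume of a single voxel."""
    return np.count_nonzero(adc_map < threshold) * voxel_volume_mm3

# Toy ADC map (units 1e-3 mm^2/s): background 0.8, a small "lesion" at 0.4.
adc = np.full((10, 10, 5), 0.8)
adc[2:5, 2:5, 1:3] = 0.4
vol = lesion_volume(adc, threshold=0.53, voxel_volume_mm3=0.5)
```

Because the count responds immediately to voxels recovering above threshold, such a measure can shrink after reperfusion, consistent with the lesion reversibility the ADC analysis detected.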
Precht, H; Kitslaar, P H; Broersen, A; Gerke, O; Dijkstra, J; Thygesen, J; Egstrup, K; Lambrechtsen, J
2017-02-01
Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Three patients each had three independent dose-reduced CCTA scans performed and reconstructed with 30% ASIR (CTDIvol 6.7 mGy), 60% ASIR (CTDIvol 4.3 mGy) and Veo (CTDIvol 1.9 mGy). Coronary plaque analysis was performed for each CCTA scan, measuring volumes, plaque burden and intensities. Plaque volume and plaque burden show a decreasing tendency from ASIR to Veo: median volume for ASIR is 314 mm³ and 337 mm³ versus 252 mm³ for Veo, and plaque burden is 42% and 44% for ASIR versus 39% for Veo. Lumen and vessel volumes decrease slightly from 30% ASIR to 60% ASIR, with lumen volume falling from 498 mm³ to 391 mm³ and vessel volume from 939 mm³ to 830 mm³. The intensities did not change overall between the different reconstructions for either lumen or plaque. We found a tendency of decreasing plaque volumes and plaque burden, but no change in intensities, with the use of low-dose Veo CCTA (1.9 mGy) compared to dose-reduced ASIR CCTA (6.7 mGy and 4.3 mGy), although more studies are warranted. Copyright © 2016 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
DOT National Transportation Integrated Search
1978-04-01
Volume 2 defines a new algorithm for the network equilibrium model that works in the space of path flows and is based on the theory of fixed point method. The goals of the study were broadly defined as the identification of aggregation practices and ...
NASA Astrophysics Data System (ADS)
Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.
2017-03-01
Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken under consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model’s computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. The introduction of the edema dynamics in Day30 dosimetry shows a significant global dose overestimation identified on the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy accounting for post-implant dose alterations during the planning procedure.
Multiclassifier fusion in human brain MR segmentation: modelling convergence.
Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander
2006-01-01
Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
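A per-voxel majority vote is the simplest form of the multi-classifier label fusion studied here (the paper's convergence model covers fusion more generally); the function name and toy 1D example are invented for illustration:

```python
import numpy as np

def majority_vote_fusion(label_volumes):
    """Fuse multiple propagated label volumes by per-voxel majority vote,
    the simplest multi-classifier fusion rule."""
    stack = np.stack(label_volumes)          # shape: (n_segmentations, ...)
    n_labels = int(stack.max()) + 1
    # per voxel, count votes for each label and keep the most frequent one
    counts = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return counts.argmax(axis=0)

# Three noisy propagated segmentations of a 1D "image" with labels {0, 1}.
seg_a = np.array([0, 1, 1, 0])
seg_b = np.array([0, 1, 0, 0])
seg_c = np.array([1, 1, 1, 0])
fused = majority_vote_fusion([seg_a, seg_b, seg_c])
```

The accuracy gain from adding more input segmentations is exactly the quantity the paper's convergence model predicts as a function of the number of inputs.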
Evaluation of Intersection Traffic Control Measures through Simulation
NASA Astrophysics Data System (ADS)
Asaithambi, Gowri; Sivanandan, R.
2015-12-01
Modeling traffic flow is stochastic in nature due to randomness in variables such as vehicle arrivals and speeds. Due to this and due to complex vehicular interactions and their manoeuvres, it is extremely difficult to model the traffic flow through analytical methods. To study this type of complex traffic system and vehicle interactions, simulation is considered as an effective tool. Application of homogeneous traffic models to heterogeneous traffic may not be able to capture the complex manoeuvres and interactions in such flows. Hence, a microscopic simulation model for heterogeneous traffic is developed using object oriented concepts. This simulation model acts as a tool for evaluating various control measures at signalized intersections. The present study focuses on the evaluation of Right Turn Lane (RTL) and Channelised Left Turn Lane (CLTL). A sensitivity analysis was performed to evaluate RTL and CLTL by varying the approach volumes, turn proportions and turn lane lengths. RTL is found to be advantageous only up to certain approach volumes and right-turn proportions, beyond which it is counter-productive. CLTL is found to be advantageous for lower approach volumes for all turn proportions, signifying the benefits of CLTL. It is counter-productive for higher approach volume and lower turn proportions. This study pinpoints the break-even points for various scenarios. The developed simulation model can be used as an appropriate intersection lane control tool for enhancing the efficiency of flow at intersections. This model can also be employed for scenario analysis and can be valuable to field traffic engineers in implementing vehicle-type based and lane-based traffic control measures.
Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T
2006-01-21
Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.
Dual-energy X-ray absorptiometry–based body volume measurement for 4-compartment body composition123
Wilson, Joseph P; Mulligan, Kathleen; Fan, Bo; Sherman, Jennifer L; Murphy, Elizabeth J; Tai, Viva W; Powers, Cassidy L; Marquez, Lorena; Ruiz-Barros, Viviana
2012-01-01
Background: Total body volume (TBV), with the exclusion of internal air voids, is necessary to quantify body composition in Lohman's 4-compartment (4C) model. Objective: This investigation sought to derive a novel TBV measure with the use of only dual-energy X-ray absorptiometry (DXA) attenuation values for use in Lohman's 4C body composition model. Design: Pixel-specific masses and volumes were calculated from low- and high-energy attenuation values with the use of first-principle conversions of mass attenuation coefficients. Pixel masses and volumes were summed to derive body mass and total body volume. As proof of concept, 11 participants were recruited to have 4C measures taken: DXA, air-displacement plethysmography (ADP), and total body water (TBW). TBV measures with the use of only DXA (DXA-volume) and ADP-volume measures were compared for each participant. To see how body composition estimates were affected by these 2 methods, we used Lohman's 4C model to quantify percentage fat measures for each participant and compared them with conventional DXA measures. Results: DXA-volume and ADP-volume measures were highly correlated (R2 = 0.99) and showed no statistically significant bias. Percentage fat by DXA volume was highly correlated with ADP-volume percentage fat measures and DXA software-reported percentage fat measures (R2 = 0.96 and R2 = 0.98, respectively) but was slightly biased. Conclusions: A novel method to calculate TBV with the use of a clinical DXA system was developed, compared against ADP as proof of principle, and used in Lohman's 4C body composition model. The DXA-volume approach eliminates many of the inherent inaccuracies associated with displacement measures for volume and, if validated in larger groups of participants, would simplify the acquisition of 4C body composition to a single DXA scan and TBW measure. PMID:22134952
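Once TBV is in hand, the 4C calculation itself is a short formula. The sketch below uses one commonly cited variant of Lohman's 4C equation; the exact coefficients vary across published sources and should be verified against the original reference before any real use, and the input numbers are purely illustrative:

```python
def percent_fat_4c(mass_kg, volume_l, tbw_kg, mineral_kg):
    """Lohman-style 4-compartment percentage fat.

    Body density Db = mass / volume; w and b are the water and
    bone-mineral fractions of body mass.  The coefficients below are
    one commonly cited variant of Lohman's 4C equation -- verify
    against the original source before real use.
    """
    db = mass_kg / volume_l
    w = tbw_kg / mass_kg
    b = mineral_kg / mass_kg
    return 100.0 * (2.747 / db - 0.714 * w + 1.146 * b - 2.0503)

# Illustrative numbers only (not from the study).
pf = percent_fat_4c(mass_kg=70.0, volume_l=66.5, tbw_kg=40.0, mineral_kg=2.8)
print(round(pf, 1))
```

Swapping a DXA-derived volume for the ADP-derived volume in `volume_l` is exactly the substitution the study evaluates.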
Development of an EVA systems cost model. Volume 3: EVA systems cost model
NASA Technical Reports Server (NTRS)
1975-01-01
The EVA systems cost model presented is based on proposed EVA equipment for the space shuttle program. General information on EVA crewman requirements in a weightless environment and an EVA capabilities overview are provided.
Nonlinear mesomechanics of composites with periodic microstructure
NASA Technical Reports Server (NTRS)
Walker, Kevin P.; Jordan, Eric H.; Freed, Alan D.
1989-01-01
This work is concerned with modeling the mechanical deformation or constitutive behavior of composites comprised of a periodic microstructure under small displacement conditions at elevated temperature. A mesomechanics approach is adopted which relates the micromechanical behavior of the heterogeneous composite with its in-service macroscopic behavior. Two different methods, one based on a Fourier series approach and the other on a Green's function approach, are used in modeling the micromechanical behavior of the composite material. Although the constitutive formulations are based on a micromechanical approach, it should be stressed that the resulting equations are volume averaged to produce overall effective constitutive relations which relate the bulk, volume averaged, stress increment to the bulk, volume averaged, strain increment. As such, they are macromodels which can be used directly in nonlinear finite element programs such as MARC, ANSYS and ABAQUS or in boundary element programs such as BEST3D. In developing the volume averaged or effective macromodels from the micromechanical models, both approaches will require the evaluation of volume integrals containing the spatially varying strain distributions throughout the composite material. By assuming that the strain distributions are spatially constant within each constituent phase (or within a given subvolume of each constituent phase) of the composite material, the volume integrals can be obtained in closed form. This simplified micromodel can then be volume averaged to obtain an effective macromodel suitable for use in the MARC, ANSYS and ABAQUS nonlinear finite element programs via user constitutive subroutines such as HYPELA and CMUSER.
This effective macromodel can be used in a nonlinear finite element structural analysis to obtain the strain-temperature history at those points in the structure where thermomechanical cracking and damage are expected to occur, the so called damage critical points of the structure.
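The volume-averaging step described above can be written compactly. In the notation below (mine, not the report's), $c_r$ is the volume fraction of subvolume $r$ within the unit cell $V$:

```latex
% Bulk stress increment as a volume average over the unit cell V:
\Delta\bar{\sigma}_{ij} = \frac{1}{V}\int_{V} \Delta\sigma_{ij}(\mathbf{x})\,\mathrm{d}V
% With the strain (hence stress) increment assumed piecewise constant over
% subvolumes r of volume fraction c_r, the integral reduces to a finite sum:
\Delta\bar{\sigma}_{ij} = \sum_{r} c_{r}\,\Delta\sigma_{ij}^{(r)},
\qquad \sum_{r} c_{r} = 1 .
```

This closed-form sum is what makes the effective macromodel cheap enough to evaluate inside a user constitutive subroutine at every integration point.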
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for the clustering of data from such visualizations. The methodology is theoretically justified and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
Imai, Takashi; Ohyama, Shusaku; Kovalenko, Andriy; Hirata, Fumio
2007-01-01
The partial molar volume (PMV) change associated with the pressure-induced structural transition of ubiquitin is analyzed by the three-dimensional reference interaction site model (3D-RISM) theory of molecular solvation. The theory predicts that the PMV decreases upon the structural transition, which is consistent with the experimental observation. The volume decomposition analysis demonstrates that the PMV reduction is primarily caused by the decrease in the volume of structural voids in the protein, which is partially canceled by the volume expansion due to the hydration effects. It is found from further analysis that the PMV reduction is ascribed substantially to the penetration of water molecules into a specific part of the protein. Based on the thermodynamic relation, this result implies that the water penetration causes the pressure-induced structural transition. It supports the water penetration model of pressure denaturation of proteins proposed earlier. PMID:17660257
Asset price and trade volume relation in artificial market impacted by value investors
NASA Astrophysics Data System (ADS)
Tangmongkollert, K.; Suwanna, S.
2016-05-01
The relationship between return and trade volume has been of great interest in a financial market. The appearance of asymmetry in the price-volume relation in the bull and bear market is still unsettled. We present a model of the value investor traders (VIs) in the double auction system, in which agents make trading decisions based on the pseudo fundamental price modelled by sawtooth oscillations. We investigate the system by two different time series for the asset fundamental price: one corresponds to the fundamental price in a growing phase; and the other corresponds to that in a declining phase. The simulation results show that the trade volume is proportional to the difference between the market price and the fundamental price, and that there is asymmetry between the buying and selling phases. Furthermore, price has a more significant impact on trade volume in the selling phase than in the buying phase.
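The two ingredients the abstract names, a sawtooth pseudo fundamental price and a trade volume proportional to the market-fundamental gap, are easy to sketch. Function names, the proportionality constant `k`, and all numbers below are my illustrative choices, not the paper's:

```python
import numpy as np

def sawtooth(t, period, low, high):
    """Pseudo fundamental price as a rising sawtooth (my simplification)."""
    frac = (t % period) / period
    return low + (high - low) * frac

def trade_volume(market_price, fundamental_price, k=1.0):
    """Abstract's reported relation: trade volume proportional to the
    market-fundamental price gap (constant k is an assumption)."""
    return k * np.abs(market_price - fundamental_price)

t = np.arange(6)
fund = sawtooth(t, period=4, low=100.0, high=110.0)
vol = trade_volume(105.0, fund)
print(fund.tolist())  # [100.0, 102.5, 105.0, 107.5, 100.0, 102.5]
print(vol.tolist())   # [5.0, 2.5, 0.0, 2.5, 5.0, 2.5]
```

The buy/sell asymmetry reported in the study would appear as different effective `k` values in the two phases.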
Neukamm, Christian; Try, Kirsti; Norgård, Gunnar; Brun, Henrik
2014-01-01
A technique that uses two-dimensional images to create a knowledge-based, three-dimensional model was tested and compared to magnetic resonance imaging. Measurement of right ventricular volumes and function is important in the follow-up of patients after pulmonary valve replacement. Magnetic resonance imaging is the gold standard for volumetric assessment. Echocardiographic methods have been validated and are attractive alternatives. Thirty patients with tetralogy of Fallot (25 ± 14 years) after pulmonary valve replacement were examined. Magnetic resonance imaging volumetric measurements and echocardiography-based three-dimensional reconstruction were performed. End-diastolic volume, end-systolic volume, and ejection fraction were measured, and the results were compared. Magnetic resonance imaging measurements gave coefficients of variation of 3.5, 4.6, and 5.3 in the intraobserver study and 3.6, 5.9, and 6.7 in the interobserver study for end-diastolic volume, end-systolic volume, and ejection fraction, respectively. Echocardiographic three-dimensional reconstruction was highly feasible (97%). In the intraobserver study, the corresponding values were 6.0, 7.0, and 8.9 and in the interobserver study 7.4, 10.8, and 13.4. In the comparison of the methods, correlations with magnetic resonance imaging were r = 0.91, 0.91, and 0.38, and the corresponding coefficients of variation were 9.4, 10.8, and 14.7. Echocardiography-derived volumes (mL/m(2)) were significantly higher than magnetic resonance imaging volumes in end-diastolic volume 13.7 ± 25.6 and in end-systolic volume 9.1 ± 17.0 (both P < .05). The knowledge-based three-dimensional right ventricular volume method was highly feasible. Intra- and interobserver variabilities were satisfactory. Agreement with magnetic resonance imaging measurements for volumes was reasonable but unsatisfactory for ejection fraction.
Knowledge-based reconstruction may replace magnetic resonance imaging measurements for serial follow-up, whereas magnetic resonance imaging should be used for surgical decision making.
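The repeatability figures quoted above are coefficients of variation, a one-line computation. The input values below are hypothetical, not taken from the study:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = 100 * sample SD / mean, the repeatability metric used
    for the intra- and interobserver comparisons above."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated end-diastolic volume measurements (mL/m2).
cv = coefficient_of_variation([120.0, 126.0, 123.0])
print(round(cv, 1))
```

A smaller CV for one modality at the same mean indicates the tighter measurement spread that favors MRI in the comparison above.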
Prediction of a Densely Loaded Particle-Laden Jet using a Euler-Lagrange Dense Spray Model
NASA Astrophysics Data System (ADS)
Pakseresht, Pedram; Apte, Sourabh V.
2017-11-01
Modeling of a dense spray regime using an Euler-Lagrange discrete-element approach is challenging because of local high volume loading. A subgrid cluster of droplets can lead to locally high void fractions for the disperse phase. Under these conditions, spatio-temporal changes in the carrier phase volume fractions, which are commonly neglected in spray simulations in an Euler-Lagrange two-way coupling model, could become important. Accounting for the carrier phase volume fraction variations leads to zero-Mach number, variable density governing equations. Using pressure-based solvers, this gives rise to a source term in the pressure Poisson equation and a non-divergence free velocity field. To test the validity and predictive capability of such an approach, a round jet laden with solid particles is investigated using Direct Numerical Simulation and compared with available experimental data for different loadings. Various volume fractions spanning from dilute to dense regimes are investigated with and without taking into account the volume displacement effects. The predictions of the two approaches are compared and analyzed to investigate the effectiveness of the dense spray model. Financial support was provided by National Aeronautics and Space Administration (NASA).
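The source term mentioned above follows directly from carrier-phase continuity. Writing $\theta$ for the carrier-phase volume fraction and assuming an incompressible carrier fluid with a standard projection step (all notation mine, sketched for orientation only):

```latex
% Carrier-phase continuity with volume fraction \theta (incompressible fluid):
\frac{\partial \theta}{\partial t} + \nabla\cdot(\theta\,\mathbf{u}) = 0
% hence the velocity field is no longer divergence free:
\nabla\cdot\mathbf{u} = -\frac{1}{\theta}\frac{D\theta}{Dt}
% and, for a projection method with intermediate velocity u*, the pressure
% Poisson equation acquires the corresponding source term:
\nabla^{2} p = \frac{\rho}{\Delta t}\left(\nabla\cdot\mathbf{u}^{*}
              + \frac{1}{\theta}\frac{D\theta}{Dt}\right)
```

Neglecting volume displacement corresponds to setting $\theta \equiv 1$, which recovers the usual divergence-free constraint.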
A new fractional order derivative based active contour model for colon wall segmentation
NASA Astrophysics Data System (ADS)
Chen, Bo; Li, Lihong C.; Wang, Huafeng; Wei, Xinzhou; Huang, Shan; Chen, Wensheng; Liang, Zhengrong
2018-02-01
Segmentation of colon wall plays an important role in advancing computed tomographic colonography (CTC) toward a screening modality. Due to the low contrast of CT attenuation around the colon wall, accurate segmentation of the boundary of both inner and outer wall is very challenging. In this paper, based on the geodesic active contour model, we develop a new model for colon wall segmentation. First, tagged materials in CTC images were automatically removed via a partial volume (PV) based electronic colon cleansing (ECC) strategy. We then present a new fractional order derivative based active contour model to segment the volumetric colon wall from the cleansed CTC images. In this model, the region-based Chan-Vese model is incorporated as an energy term in the whole model so that not only edge/gradient information but also region/volume information is taken into account in the segmentation process. Furthermore, a fractional order derivative energy term is also developed in the new model to preserve low-frequency information and improve the noise immunity of the new segmentation model. The proposed colon wall segmentation approach was validated on 16 patient CTC scans. Experimental results indicate that the present scheme is very promising towards automatically segmenting colon wall, thus facilitating computer-aided detection of initial colonic polyp candidates via CTC.
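Schematically, the combined energy described above can be written as follows. The weights $\lambda,\mu$ and the exact form of the fractional term are my notation for orientation, not the paper's formulation:

```latex
% Geodesic active contour term + Chan-Vese region term + fractional-order term:
E(\phi) = E_{\mathrm{GAC}}(\phi)
 + \lambda \int_{\Omega} \Big( |I-c_1|^2\,H(\phi) + |I-c_2|^2\,\big(1-H(\phi)\big) \Big)\,\mathrm{d}x
 + \mu \int_{\Omega} \big| D^{\alpha}\phi \big|^{2}\,\mathrm{d}x
% H is the Heaviside function; c_1, c_2 are the mean intensities inside and
% outside the contour; D^{\alpha} denotes a fractional-order derivative.
```

The fractional exponent $\alpha$ interpolates between smoothing behaviors, which is what gives the model its noise immunity while retaining low-frequency wall structure.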
A manpower calculus: the implications of SUO fellowship expansion on oncologic surgeon case volumes.
See, William A
2014-01-01
Society of Urologic Oncology (SUO)-accredited fellowship programs have undergone substantial expansion. This study developed a mathematical model to estimate future changes in urologic oncologic surgeon (UOS) manpower and analyzed the effect of those changes on per-UOS case volumes. SUO fellowship program directors were queried as to the number of positions available on an annual basis. Current US UOS manpower was estimated from the SUO membership list. Future manpower was estimated on an annual basis by linear senescence of existing manpower combined with linear growth of newly trained surgeons. Case-volume estimates for the 4 surgical disease sites (prostate, kidney/renal pelvis, bladder, and testes) were obtained from the literature. The future number of major cases was determined from current volumes based upon the US population growth rates and the historic average annual change in disease incidence. Two models were used to predict future per-UOS major case volumes. Model 1 assumed the current distribution of cases between nononcologic surgeons and UOS would continue. Model 2 assumed a progressive redistribution of cases over time such that in 2043 100% of major urologic cancer cases would be performed by UOSs. Over the 30-year period to "manpower steady-state" SUO-accredited UOSs practicing in the United States have the potential to increase from approximately 600 currently to 1,650 in 2043. During this interval, case volumes are predicted to change 0.97-, 2.4-, 1.1-, and 1.5-fold for prostatectomy, nephrectomy, cystectomy, and retroperitoneal lymph node dissection, respectively. The ratio of future to current total annual case volumes is predicted to be 0.47 and 0.9 for models 1 and 2, respectively. The number of annual US practicing graduates necessary to achieve a future to current case-volume ratio greater than 1 is 25 and 49 in models 1 and 2, respectively. 
The current number of SUO fellowship trainees has the potential to decrease future per-UOS case volumes relative to current levels. Redistribution of existing case volume or a decrease in the annual number of trainees or both would be required to ensure sufficient surgical volumes for skill maintenance and optimal patient outcomes. Published by Elsevier Inc.
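The manpower model described above, linear senescence of the current cohort plus linear entry of new graduates, can be sketched in a few lines. The graduates-per-year figure below is an assumption chosen to reproduce the abstract's 2043 steady-state headcount, not a number from the study:

```python
def uos_manpower(years, current=600, career_len=30, grads_per_year=55):
    """Projected UOS headcount after `years`: the existing cohort
    retires linearly over career_len while new graduates enter linearly.
    grads_per_year=55 is an assumed rate, not from the study."""
    remaining = max(current * (1 - years / career_len), 0)
    entered = min(years, career_len) * grads_per_year
    return remaining + entered

print(uos_manpower(0))   # 600.0 today
print(uos_manpower(30))  # 1650.0 at steady state, matching the 2043 estimate
```

At "manpower steady-state" the headcount is simply `career_len * grads_per_year`, which is why the 25- and 49-graduate thresholds quoted above translate directly into future case-volume ratios.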
Estimation of Effective Directional Strength of Single Walled Wavy CNT Reinforced Nanocomposite
NASA Astrophysics Data System (ADS)
Bhowmik, Krishnendu; Kumar, Pranav; Khutia, Niloy; Chowdhury, Amit Roy
2018-03-01
In the present work, a composite reinforced with single walled wavy carbon nanotubes has been studied to predict the effective directional strength of the nanocomposite. The effect of waviness on the overall Young’s modulus of the composite has been analysed using a three dimensional finite element model. The waviness pattern of the carbon nanotube is considered as a periodic cosine function. Both long (continuous) and short (discontinuous) carbon nanotubes are idealized as solid annular tubes. The short carbon nanotube is modelled with a hemispherical cap at both ends. Representative Volume Element models have been developed with different waviness, height fractions, volume fractions and modulus ratios of carbon nanotubes. Consequently, a micromechanics based analytical model has been formulated to derive the effective reinforcing modulus of wavy carbon nanotubes. In these models, the single walled wavy carbon nanotubes are considered to be aligned along the longitudinal axis of the Representative Volume Element model. Results obtained from finite element analyses are compared with the analytical model and are found to be in good agreement.
Fast multiview three-dimensional reconstruction method using cost volume filtering
NASA Astrophysics Data System (ADS)
Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.
2014-03-01
As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method which quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented, which is suitable for the mobile environment by constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then it is filtered according to the multiscale. The multiscale-based cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.
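The core of the method above, filtering a photoconsistency cost volume and taking the per-pixel winner, can be sketched briefly. The box filter, array shapes, and toy data are my simplifications; the paper's multiscale filtering is richer than this:

```python
import numpy as np

def box_filter(cost, k):
    """Separable box filter over the two spatial axes of an (H, W, D)
    cost volume (a single-scale stand-in for the multiscale filtering)."""
    kernel = np.ones(k) / k
    smooth = lambda m: np.convolve(m, kernel, mode="same")
    cost = np.apply_along_axis(smooth, 0, cost)
    return np.apply_along_axis(smooth, 1, cost)

def height_from_cost(cost, k=3):
    """Winner-take-all height label after filtering; the photoconsistency
    costs themselves are assumed to be given."""
    return box_filter(cost, k).argmin(axis=2)

# Toy volume: height index 1 is the most photoconsistent everywhere.
cost = np.ones((4, 4, 3))
cost[:, :, 1] = 0.0
height = height_from_cost(cost)
print(height[0, 0])  # 1
```

Filtering before the argmin is what lets the reconstruction keep the overall shape while small filter scales preserve detail.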
A simplified GIS-based model for large wood recruitment and connectivity in mountain basins
NASA Astrophysics Data System (ADS)
Lucía, Ana; Antonello, Andrea; Campana, Daniela; Cavalli, Marco; Crema, Stefano; Franceschi, Silvia; Marchese, Enrico; Niedrist, Martin; Schneiderbauer, Stefan; Comiti, Francesco
2014-05-01
The mobilization of large wood (LW) elements in mountain river channels during floods may increase their hazard potential, especially by clogging narrow sections such as bridges. However, the prediction of LW transport magnitude during flood events is a challenging topic. Although some models of LW transport have been developed recently, the objective of this work was to generate a simplified GIS-based model to identify along the channel network the most likely LW-related critical sections during high-magnitude flood events in forested mountain basins. Potential LW contribution generated by landsliding occurring on hillslopes is assessed using the SHALSTAB stability model coupled with a GIS-based connectivity index, developed as a modification of the index proposed by Cavalli et al. (2013). Connected slope-derived LW volumes are then summed at each raster cell to LW volumes generated by bank erosion along the erodible part of river corridors, where bank erosion processes are estimated based on user-defined channel widening ratios stemming from observations following recent extreme events in mountain basins. LW volume in the channel is then routed through the stream network applying simple Boolean rules meant to capture the most important limiting transport condition in these high-energy systems at flood stage, i.e. flow width relative to log length. In addition, the role of bridges and retention check-dams in blocking floating logs is accounted for in the model; in particular, bridge length and height are used to characterize their clogging susceptibility for different levels of expected LW volumes and size. The model has been tested in the Rienz and Ahr basins (about 630 km2 each), located in the Eastern Italian Alps. Sixty percent of the basin area is forested, and elevations range from 811 m a.s.l. to 3488 m a.s.l. We used a 2.5 m resolution DTM and DSM, and their difference was used to calculate the canopy height. Data from 35 plots of the National Forest Inventory were used to estimate forest stand volume by a semi-empirical model. A database on shallow landslides along with precipitation depth was utilized to calibrate the parameters for the SHALSTAB model. Orthophotos (0.5 m pixel resolution) and existing technical maps were used to delimit the channel banks, which were then used to automatically calculate channel width for each grid cell. The model output provided information about the expected volume and mean size of LW recruited and transported during a 300 yr flood event in the test basins, as well as the location of the most probable clogged sections (mostly related to infrastructures) along the channel network. The model thus shows the capability to assist river managers in identifying the most critical sections of river networks and to assess the effectiveness and location of different mitigation options such as wood retention structures or forest management practices.
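The Boolean routing rule named above (logs move only where flow width exceeds log length) reduces to a short loop over downstream reaches. This is a deliberately all-or-nothing sketch with illustrative numbers, not the full raster implementation:

```python
def route_large_wood(reach_widths, log_length, lw_volume):
    """Route an LW volume downstream through successive reaches.

    Boolean rule from the abstract: logs are transported through a reach
    only if its flow width exceeds the log length; otherwise the whole
    volume deposits there (simplified all-or-nothing assumption).
    Returns per-reach deposited volume and the volume exiting the outlet.
    """
    deposited = []
    for width in reach_widths:
        if width > log_length:
            deposited.append(0.0)          # transported further downstream
        else:
            deposited.append(lw_volume)    # clogging / deposition here
            lw_volume = 0.0
    return deposited, lw_volume

# Three reaches (widths in m), 10 m logs, 50 m3 of recruited wood.
dep, outlet = route_large_wood([12.0, 8.0, 15.0], log_length=10.0, lw_volume=50.0)
print(dep, outlet)  # [0.0, 50.0, 0.0] 0.0
```

Bridges and check-dams would enter the same loop as extra reaches whose effective width reflects their clogging susceptibility.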
NASA Astrophysics Data System (ADS)
Ülker, Erkan; Turanboy, Alparslan
2009-07-01
The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach by taking into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).
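The recovery-rate definition above is a direct ratio; a minimal sketch with illustrative volumes (not quarry data):

```python
def recovery_rate(block_volumes_m3, moved_volume_m3):
    """Recovery rate = total useful rough-block volume / total moved volume,
    per the definition in the abstract."""
    return sum(block_volumes_m3) / moved_volume_m3

# Three recovered cuboid blocks from 100 m3 of moved material (illustrative).
r = recovery_rate([12.0, 8.5, 4.5], moved_volume_m3=100.0)
print(r)  # 0.25
```

In the study's pipeline, the GA searches over cuboid decompositions of the fracture-bounded volume to maximize exactly this ratio.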
NASA Astrophysics Data System (ADS)
Tai, Y.; Watanabe, T.; Nagata, K.
2018-03-01
A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated between passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale; at smaller scales, the mixing time scale is easily affected by the different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in modeling the thermal conduction when modeling the dissipation rate of temperature fluctuations is difficult.
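The particle interaction at the heart of such Lagrangian mixing models can be sketched as each notional particle relaxing toward the in-volume mean over the mixing time scale. This is an IEM-like caricature of the interaction, not the paper's exact closure, and the numbers are illustrative:

```python
import statistics

def mix_particles(phi, dt, tau):
    """One mixing step for notional particles inside a mixing volume:
    each scalar value relaxes toward the in-volume mean over time scale
    tau.  An IEM-like sketch, not the paper's exact closure; note that
    the volume mean is conserved by construction."""
    mean = statistics.mean(phi)
    return [p + (mean - p) * dt / tau for p in phi]

phi = mix_particles([0.0, 1.0], dt=0.5, tau=1.0)
print(phi)  # [0.25, 0.75] -- both values moved toward the mean 0.5
```

Using one `tau` for both passive scalar and temperature is the equal-time-scale assumption the abstract identifies as useful for compressible thermal conduction.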
NASA Astrophysics Data System (ADS)
Xu, Xiaojiang; Rioux, Timothy P.; MacLeod, Tynan; Patel, Tejash; Rome, Maxwell N.; Potter, Adam W.
2017-03-01
The purpose of this paper is to develop a database of tissue composition, distribution, volume, surface area, and skin thickness from anatomically correct human models, the virtual family. These models were based on high-resolution magnetic resonance imaging (MRI) of human volunteers, including two adults (male and female) and two children (boy and girl). In the segmented image dataset, each voxel is associated with a label which refers to the tissue type that occupies that specific cubic millimeter of the body. The tissue volume was calculated from the number of voxels with the same label. Volumes of 24 organs in the body and volumes of 7 tissues in 10 specific body regions were calculated. Surface area was calculated from the collection of voxels that touch the exterior air. Skin thickness was estimated from skin volume and surface area. The differences between the calculated and original masses were about 3 % or less for tissues or organs that are important to thermoregulatory modeling, e.g., muscle, skin, and fat. This accurate database of body tissue distributions and geometry is essential for the development of human thermoregulatory models. Data derived from medical imaging provide new effective tools to enhance thermal physiology research and gain deeper insight into the mechanisms of how the human body maintains heat balance.
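With 1 mm voxels, the volume and thickness calculations above reduce to counting labels. A minimal sketch (function names and the toy label array are mine):

```python
import numpy as np

def tissue_volumes_mm3(labels):
    """Tissue volume per label from a segmentation with 1 mm^3 voxels:
    volume in mm^3 equals the voxel count, as in the abstract."""
    ids, counts = np.unique(labels, return_counts=True)
    return dict(zip(ids.tolist(), counts.tolist()))

def mean_thickness_mm(volume_mm3, surface_area_mm2):
    """Mean skin thickness estimated as skin volume / surface area."""
    return volume_mm3 / surface_area_mm2

labels = np.array([[0, 1], [1, 2]])       # toy 2-D "volume", 3 tissue labels
print(tissue_volumes_mm3(labels))         # {0: 1, 1: 2, 2: 1}
print(mean_thickness_mm(1.8e6, 1.8e6))    # 1.0 (mm)
```

The surface-area step in the paper is analogous: count the exposed faces of voxels adjacent to exterior air rather than the voxels themselves.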
Early-age hydration and volume change of calcium sulfoaluminate cement-based binders
NASA Astrophysics Data System (ADS)
Chaunsali, Piyush
Shrinkage cracking is a predominant deterioration mechanism in structures with a high surface-to-volume ratio. One way to allay shrinkage-induced stresses is to use calcium sulfoaluminate (CSA) cement, whose early-age expansion in restrained condition induces compressive stress that can be utilized to counter the tensile stresses due to shrinkage. In addition to enhancing the resistance against shrinkage cracking, CSA cement also has a lower carbon footprint than that of Portland cement. This dissertation aims at improving the understanding of early-age volume change of CSA cement-based binders. For the first time, the interaction between mineral admixtures (Class F fly ash, Class C fly ash, and silica fume) and an OPC-CSA binder was studied. Various physico-chemical factors such as the hydration of ye'elimite (main component in CSA cement), amount of ettringite (the main phase responsible for expansion in CSA cement), supersaturation with respect to ettringite in cement pore solution, total pore volume, and material stiffness were monitored to examine early-age expansion characteristics. This research validated the crystallization stress theory by showing the presence of a higher supersaturation level of ettringite, and therefore higher crystallization stress, in CSA cement-based binders. Supersaturation with respect to ettringite was found to increase with CSA dosage and external supply of gypsum. Mineral admixtures (MA) altered the expansion characteristics in OPC-CSA-MA binders with fixed CSA cement. This study reports that fly ash (FA) behaves differently depending on its phase composition. The Class C FA-based binder (OPC-CSA-CFA) ceased expanding beyond two days, unlike other OPC-CSA-MA binders. Three factors were found to govern expansion of CSA cement-based binders: 1) volume fraction of ettringite in a given pore volume, 2) saturation level of ettringite, and 3) dynamic modulus. Various models were utilized to estimate the macroscopic tensile stress in CSA cement-based binders without taking into account the viscoelastic effects. For the first time, a model based on poromechanics was used to calculate the macroscopic tensile stress that develops in CSA cement-based binders due to crystallization of ettringite. The models enabled a reasonable prediction of tensile stress due to crystallization of ettringite, including the failure of an OPC-CSA binder which had high CSA cement content. Elastic strain based on crystallization stress was calculated and compared with the observed strain. A mismatch between observed and calculated elastic strain indicated the presence of early-age creep. Lastly, the application of CSA cement in concretes is discussed to link the paste and concrete behavior.
NASA Astrophysics Data System (ADS)
Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Chang, Kyung Hwan; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie
2017-08-01
The aim of this study is to develop, as preliminary research, an integrated text-based data mining and toxicity prediction modeling system for a clinical decision support system based on big data in radiation oncology. The structured and unstructured data were prepared from treatment plans; the unstructured dose-volume data for prostate cancer were extracted by image pattern recognition from research articles crawled from the internet. We modeled an artificial neural network to build a predictor model system for toxicity prediction of organs at risk. We used a text-based data mining approach to build the artificial neural network model for bladder and rectum complication predictions. The pattern recognition method was used to mine the unstructured dose-volume toxicity data at a detection accuracy of 97.9%. The confusion matrix and training model of the neural network were achieved with 50 modeled plans (n = 50) for validation. The toxicity level was analyzed and the risk factors for 25% bladder, 50% bladder, 20% rectum, and 50% rectum were calculated by the artificial neural network algorithm. As a result, 32 plans could cause complications but 18 plans were designed as non-complication among the 50 modeled plans. We integrated data mining and a toxicity modeling method for toxicity prediction using prostate cancer cases. It is shown that a preprocessing analysis using text-based data mining and prediction modeling can be expanded to personalized patient treatment decision support based on big data.
An image-based model of brain volume biomarker changes in Huntington's disease.
Wijeratne, Peter A; Young, Alexandra L; Oxtoby, Neil P; Marinescu, Razvan V; Firth, Nicholas C; Johnson, Eileanoir B; Mohan, Amrita; Sampaio, Cristina; Scahill, Rachael I; Tabrizi, Sarah J; Alexander, Daniel C
2018-05-01
Determining the sequence in which Huntington's disease biomarkers become abnormal can provide important insights into the disease progression and a quantitative tool for patient stratification. Here, we construct and present a uniquely fine-grained model of temporal progression of Huntington's disease from premanifest through to manifest stages. We employ a probabilistic event-based model to determine the sequence of appearance of atrophy in brain volumes, learned from structural MRI in the Track-HD study, as well as to estimate the uncertainty in the ordering. We use longitudinal and phenotypic data to demonstrate the utility of the patient staging system that the resulting model provides. The model recovers the following order of detectable changes in brain region volumes: putamen, caudate, pallidum, insula white matter, nonventricular cerebrospinal fluid, amygdala, optic chiasm, third ventricle, posterior insula, and basal forebrain. This ordering is mostly preserved even under cross-validation of the uncertainty in the event sequence. Longitudinal analysis performed using 6 years of follow-up data from baseline confirms efficacy of the model, as subjects consistently move to later stages with time, and significant correlations are observed between the estimated stages and nonimaging phenotypic markers. We used a data-driven method to provide new insight into Huntington's disease progression as well as new power to stage and predict conversion. Our results highlight the potential of disease progression models, such as the event-based model, to provide new insight into Huntington's disease progression and to support fine-grained patient stratification for future precision medicine in Huntington's disease.
Uppal, Shitanshu; Chapman, Christina; Spencer, Ryan J; Jolly, Shruti; Maturen, Kate; Rauh-Hain, J Alejandro; delCarmen, Marcela G; Rice, Laurel W
2017-02-01
To evaluate racial-ethnic disparities in guideline-based care in locally advanced cervical cancer and their relationship to hospital case volume. Using the National Cancer Database, we performed a retrospective cohort study of women diagnosed between 2004 and 2012 with locally advanced squamous or adenocarcinoma of the cervix undergoing definitive primary radiation therapy. The primary outcome was the race-ethnicity-based rates of adherence to the National Comprehensive Cancer Network guideline-based care. The secondary outcome was the effect of guideline-based care on overall survival. Multivariable models and propensity matching were used to compare the hospital risk-adjusted rates of guideline-based adherence and overall survival based on hospital case volume. The final cohort consisted of 16,195 patients. The rate of guideline-based care was 58.4% (95% confidence interval [CI] 57.4-59.4%) for non-Hispanic white, 53% (95% CI 51.4-54.9%) for non-Hispanic black, and 51.5% (95% CI 49.4-53.7%) for Hispanic women (P<.001). From 2004 to 2012, the rate of guideline-based care increased from 49.5% (95% CI 47.1-51.9%) to 59.1% (95% CI 56.9-61.2%) (Ptrend<.001). Based on a propensity score-matched analysis, patients receiving guideline-based care had a lower risk of mortality (adjusted hazard ratio 0.65, 95% CI 0.62-0.68). Compared with low-volume hospitals, the increase in adherence to guideline-based care in high-volume hospitals was 48-63% for non-Hispanic white, 47-53% for non-Hispanic black, and 41-54% for Hispanic women. Racial and ethnic disparities in the delivery of guideline-based care are the highest in high-volume hospitals. Guideline-based care in locally advanced cervical cancer is associated with improved survival.
Stemflow estimation in a redwood forest using model-based stratified random sampling
Jack Lewis
2003-01-01
Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...
Cost optimization in low volume VLSI circuits
NASA Technical Reports Server (NTRS)
Cook, K. B., Jr.; Kerns, D. V., Jr.
1982-01-01
The relationship of integrated circuit (IC) cost to electronic system cost is developed using models for integrated circuit cost based on design/fabrication approach. Emphasis is on understanding the relationship between cost and volume for custom circuits suitable for NASA applications. In this report, reliability is a major consideration in the models developed. Results are given for several typical IC designs using off-the-shelf, full-custom, and semicustom ICs with single- and double-level metallization.
Sakata, Dousatsu; Kyriakou, Ioanna; Okada, Shogo; Tran, Hoang N; Lampe, Nathanael; Guatelli, Susanna; Bordage, Marie-Claude; Ivanchenko, Vladimir; Murakami, Koichi; Sasaki, Takashi; Emfietzoglou, Dimitris; Incerti, Sebastien
2018-05-01
Gold nanoparticles (GNPs) are known to enhance the absorbed dose in their vicinity following photon-based irradiation. To investigate the therapeutic effectiveness of GNPs, previous Monte Carlo simulation studies have explored GNP dose enhancement using mostly condensed-history models. However, in general, such models are suitable for macroscopic volumes and for electron energies above a few hundred electron volts. We have recently developed, for the Geant4-DNA extension of the Geant4 Monte Carlo simulation toolkit, discrete physics models for electron transport in gold which include the description of the full atomic de-excitation cascade. These models allow event-by-event simulation of electron tracks in gold down to 10 eV. The present work describes how such specialized physics models impact simulation-based studies on GNP-radioenhancement in a context of x-ray radiotherapy. The new discrete physics models are compared to the Geant4 Penelope and Livermore condensed-history models, which are being widely used for simulation-based NP radioenhancement studies. An ad hoc Geant4 simulation application has been developed to calculate the absorbed dose in liquid water around a GNP and its radioenhancement, caused by secondary particles emitted from the GNP itself, when irradiated with a monoenergetic electron beam. The effect of the new physics models is also quantified in the calculation of secondary particle spectra, when originating in the GNP and when exiting from it. The new physics models show similar backscattering coefficients with the existing Geant4 Livermore and Penelope models in large volumes for 100 keV incident electrons. However, in submicron sized volumes, only the discrete models describe the high backscattering that should still be present around GNPs at these length scales. 
Sizeable differences (mostly above a factor of 2) are also found in the radial distribution of absorbed dose and secondary particles between the new and the existing Geant4 models. The degree to which these differences are due to intrinsic limitations of the condensed-history models or to differences in the underlying scattering cross sections requires further investigation. Improved physics models for gold are necessary to better model the impact of GNPs in radiotherapy via Monte Carlo simulations. We implemented discrete electron transport models for gold in Geant4 that are applicable down to 10 eV, including modeling of the full de-excitation cascade. It is demonstrated that the new model has a significant positive impact on particle transport simulations in gold volumes with submicron dimensions compared to the existing Livermore and Penelope condensed-history models of Geant4. © 2018 American Association of Physicists in Medicine.
Kim, Jae-Hyun; Park, Eun-Cheol; Lee, Sang Gyu; Lee, Tae-Hyun; Jang, Sung-In
2016-03-01
We examined whether the level of hospital-based healthcare technology was related to the 30-day postoperative mortality rates, after adjusting for hospital volume, of ischemic stroke patients who underwent a cerebrovascular surgical procedure. Using the National Health Insurance Service-Cohort Sample Database, we reviewed records from 2002 to 2013 for data on patients with ischemic stroke who underwent cerebrovascular surgical procedures. Statistical analysis was performed using Cox proportional hazard models to test our hypothesis. A total of 798 subjects were included in our study. After adjusting for hospital volume of cerebrovascular surgical procedures, as well as for all other potential confounders, the hazard ratio (HR) of 30-day mortality in low healthcare technology hospitals as compared to high healthcare technology hospitals was 2.583 (P < 0.001). We also found that, although the HR of 30-day mortality in low healthcare technology hospitals with high volume as compared to high healthcare technology hospitals with high volume was the highest (10.014, P < 0.0001), cerebrovascular surgical procedure patients treated in low healthcare technology hospitals had the highest 30-day mortality rate, irrespective of hospital volume. Although the results of our study provide scientific evidence for a hospital volume/30-day mortality rate relationship in ischemic stroke patients who underwent cerebrovascular surgical procedures, our results also suggest that the level of hospital-based healthcare technology is associated with mortality rates independent of hospital volume. Given these results, further research into which components of hospital-based healthcare technology significantly impact mortality is warranted.
Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Bonta, Dacian Viorel
Modeling of volume effects for treatment toxicity is paramount for optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and pathological basis for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The model parameters were fit by maximum likelihood estimation. Within the framework of the maximum likelihood approach I estimated the parameter uncertainties for each complication prediction model. The goodness of fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance of the fitted parameter values. Computer simulation was also used to estimate the population size that generates a specific uncertainty level (3%) in the value of predicted complication probability. The same method was used to estimate the size of the patient population needed for accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 patients (for two-parameter models) or 500 patients (for three-parameter models) are needed for an accurate parameter fit. Correlation of complication occurrence in patients was also investigated.
The results suggest that complication outcomes are correlated in a patient, although the correlation coefficient is rather small.
Jain, Rajat; Omar, Mohamed; Chaparala, Hemant; Kahn, Adam; Li, Jianbo; Kahn, Leonard; Sivalingam, Sri
2018-04-23
To compare the accuracy and reliability of stone volume estimated by ellipsoid formula (EFv) and CT-based algorithm (CTv) to true volume (TV) by water displacement in an in vitro model. Ninety stone phantoms were created using clay (0.5-40 cm³, 814 ± 91 HU) and scanned with CT. For each stone, TV was measured by water displacement, CTv was calculated by the region-growing algorithm in the CT-based software AGFA IMPAX Volume Viewer, and EFv was calculated by the standard formula π × L × W × H × 0.167. All measurements were repeated thrice, and the concordance correlation coefficient (CCC) was calculated for the whole group, as well as for subgroups based on volume (<1.5 cm³, 1.5-6 cm³, and >6 cm³). Mean TV, CTv, and EFv were 6.42 cm³ ± 6.57 (range: 0.5-39.37 cm³), 6.24 cm³ ± 6.15 (0.48-36.1 cm³), and 8.98 cm³ ± 9.96 (0.49-47.05 cm³), respectively. When comparing TV to CTv, CCC was 0.99 (95% confidence interval [CI]: 0.99-0.995), indicating excellent agreement, although TV was slightly underestimated at larger volumes. When comparing TV to EFv, CCC was 0.82 (95% CI: 0.78-0.86), indicating poor agreement. EFv tended to overestimate the TV, especially as stone volume increased beyond 1.5 cm³, and there was a significant spread between trials. An automated CT-based algorithm more accurately and reliably estimates stone volume than does the ellipsoid formula. While further research is necessary to validate stone volume as a surrogate for stone burden, CT-based algorithmic volume measurement of urinary stones is a promising technology.
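The ellipsoid formula quoted above, π × L × W × H × 0.167, is the volume of an ellipsoid whose axes are the stone's three diameters (0.167 ≈ 1/6, since V = (4/3)π(L/2)(W/2)(H/2) = πLWH/6). A minimal sketch of the calculation:

```python
import math

def ellipsoid_volume(length, width, height):
    """Stone volume by the standard ellipsoid formula pi*L*W*H*0.167.

    The 0.167 factor approximates 1/6, i.e. the exact volume of an
    ellipsoid with axis lengths L, W, H. Inputs and output share units
    (e.g. cm -> cm^3).
    """
    return math.pi * length * width * height * 0.167

def percent_error(estimate, true_volume):
    """Signed percent error of an estimate against a reference volume."""
    return 100.0 * (estimate - true_volume) / true_volume
```

As the abstract's CCC results show, this formula systematically overestimates true (water-displacement) volume for irregular stones, since real calculi are rarely perfect ellipsoids.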
NASA Astrophysics Data System (ADS)
Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang
2018-01-01
A volume metrology method based on internal electro-optical distance ranging is established for large vertical energy-storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key processing algorithms for the point cloud data, such as gross-error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m³ was selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
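The integration step described above, cross-sectional area per horizontal slice integrated along the vertical axis, can be sketched as follows. This is an illustrative trapezoid-rule implementation assuming circular cross-sections with per-slice radii already fitted from the point cloud; the paper's actual algorithm operates on the full point-cloud geometry:

```python
import math

def tank_volume(slice_radii, dz):
    """Approximate tank volume from fitted per-slice radii.

    slice_radii: radius of the fitted circular cross-section at each of
                 N evenly spaced heights (spacing dz).
    Integrates A(z) = pi * r(z)^2 over height with the trapezoid rule.
    """
    areas = [math.pi * r * r for r in slice_radii]
    # Trapezoid rule: average adjacent slice areas, multiply by spacing.
    return sum(dz * (a0 + a1) / 2.0 for a0, a1 in zip(areas, areas[1:]))
```

Evaluating the cumulative sum up to a given height instead of the full sum yields the volume-vs-liquid-level table that tank calibration requires.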
NASA Astrophysics Data System (ADS)
Indahlastari, Aprinda; Chauhan, Munish; Schwartz, Benjamin; Sadleir, Rosalind J.
2016-12-01
Objective. In this study, we determined efficient head model sizes relative to predicted current densities in transcranial direct current stimulation (tDCS). Approach. Efficiency measures were defined based on finite element (FE) simulations performed using nine human head models derived from a single MRI data set, having extents varying from 60%-100% of the original axial range. Eleven tissue types, including anisotropic white matter, and three electrode montages (T7-T8, F3-right supraorbital, Cz-Oz) were used in the models. Main results. Reducing head volume extent from 100% to 60%, that is, varying the model's axial range from one spanning the apex to the C3 vertebra to one encompassing only the apex to the superior cerebellum, was found to decrease the total modeling time by up to half. Differences between current density predictions in each model were quantified using a relative difference measure (RDM). Our simulation results showed that the RDM was least affected (a maximum of 10% error) for head volumes modeled from the apex to the base of the skull (60%-75% volume). Significance. This finding suggests that the bone can act as a bioelectricity boundary, and thus FE simulations of tDCS on human head models extending beyond the inferior skull may not be necessary in most cases to obtain reasonable precision in current density results.
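The relative difference measure used to compare current-density predictions is commonly defined as the norm of the difference between unit-normalized field vectors, ranging from 0 (identical shape) to 2 (opposite). A sketch under that standard definition (the paper's exact normalization may differ):

```python
import math

def rdm(j1, j2):
    """Relative difference measure between two field vectors:
    RDM = || j1/||j1|| - j2/||j2|| ||_2.

    Insensitive to overall magnitude; measures only shape/direction
    disagreement between the two current-density solutions.
    """
    n1 = math.sqrt(sum(v * v for v in j1))
    n2 = math.sqrt(sum(v * v for v in j2))
    return math.sqrt(sum((a / n1 - b / n2) ** 2 for a, b in zip(j1, j2)))
```

In practice j1 and j2 would be the concatenated current-density values over all mesh nodes of the full and truncated head models.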
Hand volume estimates based on a geometric algorithm in comparison to water displacement.
Mayrovitz, H N; Sims, N; Hill, C J; Hernandez, T; Greenshner, A; Diep, H
2006-06-01
Assessing changes in upper extremity limb volume during lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although arm volumes may be determined by tape measure, the suitability of circumference measurements to estimate hand volumes is questionable because of the deviation in circularity of hand shape. Our aim was to develop an alternative measurement procedure and algorithm for routine use to estimate hand volumes. A caliper was used to measure hand width and depth in 33 subjects (66 hands) and volumes (VE) were calculated using an elliptical frustum model. Using regression analysis and limits of agreement (LOA), VE was compared to volumes determined by water displacement (VW), to volumes calculated from tape-measure determined circumferences (VC), and to a trapezoidal model (VT). VW and VE (mean +/- SD) were similar (363 +/- 98 vs. 362 +/-100 ml) and highly correlated; VE = 1.01VW -3.1 ml, r=0.986, p<0.001, with LOA of +/- 33.5 ml and +/- 9.9 %. In contrast, VC (480 +/- 138 ml) and VT (432 +/- 122 ml) significantly overestimated volume (p<0.0001). These results indicate that the elliptical algorithm can be a useful alternative to water displacement when hand volumes are needed and the water displacement method is contra-indicated, impractical to implement, too time consuming or not available.
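The elliptical frustum model above stacks short segments whose end cross-sections are ellipses computed from caliper width and depth. A minimal sketch, assuming evenly spaced measurement levels and the standard frustum formula V = (h/3)(A1 + A2 + √(A1·A2)):

```python
import math

def hand_volume(widths, depths, seg_len):
    """Hand volume from caliper measurements via elliptical frustums.

    widths, depths: hand width and depth at N evenly spaced levels (cm).
    seg_len: spacing h between levels (cm).
    Each cross-section is an ellipse, A = pi * w * d / 4; adjacent
    sections bound a frustum of volume (h/3) * (A1 + A2 + sqrt(A1*A2)).
    """
    areas = [math.pi * w * d / 4.0 for w, d in zip(widths, depths)]
    return sum(seg_len / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))
               for a1, a2 in zip(areas, areas[1:]))
```

For constant width and depth the model reduces to an elliptical cylinder, which is a quick sanity check on the formula.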
SU-E-T-129: Are Knowledge-Based Planning Dose Estimates Valid for Distensible Organs?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, R; Heron, D; Huq, M
2015-06-15
Purpose: Knowledge-based planning programs have become available to assist treatment planning in radiation therapy. Such programs can be used to generate estimated DVHs and planning constraints for organs at risk (OARs), based upon a model generated from previous plans. These estimates are based upon the planning CT scan. However, for distensible OARs like the bladder and rectum, daily variations in volume may make the dose estimates invalid. The purpose of this study is to determine whether knowledge-based DVH dose estimates may be valid for distensible OARs. Methods: The Varian RapidPlan™ knowledge-based planning module was used to generate OAR dose estimates and planning objectives for 10 prostate cases previously planned with VMAT, and final plans were calculated for each. Five weekly setup CBCT scans of each patient were then downloaded and contoured (assuming no change in size and shape of the target volume), and rectum and bladder DVHs were recalculated for each scan. Dose volumes were then compared at 75, 60, and 40 Gy for the bladder and rectum between the planning scan and the CBCTs. Results: Plan doses and estimates matched well at all dose points. Volumes of the rectum and bladder varied widely between the planning CT and the CBCTs, ranging from 0.46 to 2.42 times the planning volume for the bladder and 0.71 to 2.18 for the rectum, causing relative dose volumes to vary between planning CT and CBCT, but absolute dose volumes were more consistent. The overall ratio of CBCT/plan dose volumes was 1.02 ± 0.27 for rectum and 0.98 ± 0.20 for bladder in these patients. Conclusion: Knowledge-based planning dose volume estimates for distensible OARs are still valid, in absolute volume terms, between treatment planning scans and CBCTs taken during daily treatment. Further analysis of the data is being undertaken to determine how differences depend upon rectum and bladder filling state. This work has been supported by Varian Medical Systems.
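The dose-volume comparison above evaluates single points on the cumulative DVH: the volume of an organ receiving at least a given dose. An illustrative sketch of that computation from per-voxel doses (the study itself used the planning system's DVH tools):

```python
def dvh_point(voxel_doses, threshold_gy, voxel_cc=1.0):
    """One point on a cumulative DVH.

    Returns (relative_volume, absolute_volume_cc): the fraction and the
    absolute volume of the organ receiving >= threshold_gy, given the
    dose to each contoured voxel and the voxel volume in cc (assumed
    uniform here).
    """
    hit = sum(1 for d in voxel_doses if d >= threshold_gy)
    rel = hit / len(voxel_doses)
    return rel, hit * voxel_cc
```

Evaluating such points at 75, 60, and 40 Gy on both the planning-CT and CBCT contours, then taking their ratio, reproduces the kind of CBCT/plan comparison reported in the Results.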
Huizinga, Richard J.
2014-01-01
The rainfall-runoff pairs from the storm-specific GUH analysis were further analyzed against various basin and rainfall characteristics to develop equations to estimate the peak streamflow and flood volume based on a quantity of rainfall on the basin.
Will It Float?: A Learning Cycle Investigation of Mass and Volume
ERIC Educational Resources Information Center
Vincent, Dan; Cassel, Darlinda; Milligan, Jeanie
2008-01-01
In this science investigation based on the 5E learning model, students moved through four different centers designed to focus their attention on the concepts of mass, volume, and density. At these stations, students encountered discrepant events that heightened their curiosity and encouraged discussion with peers about what they expected and…
Nowcasting Intraseasonal Recreational Fishing Harvest with Internet Search Volume
Carter, David W.; Crosson, Scott; Liese, Christopher
2015-01-01
Estimates of recreational fishing harvest are often unavailable until after a fishing season has ended. This lag in information complicates efforts to stay within the quota. The simplest way to monitor quota within the season is to use harvest information from the previous year. This works well when fishery conditions are stable, but is inaccurate when fishery conditions are changing. We develop regression-based models to “nowcast” intraseasonal recreational fishing harvest in the presence of changing fishery conditions. Our basic model accounts for seasonality, changes in the fishing season, and important events in the fishery. Our extended model uses Google Trends data on the internet search volume relevant to the fishery of interest. We demonstrate the model with the Gulf of Mexico red snapper fishery where the recreational sector has exceeded the quota nearly every year since 2007. Our results confirm that data for the previous year works well to predict intraseasonal harvest for a year (2012) where fishery conditions are consistent with historic patterns. However, for a year (2013) of unprecedented harvest and management activity our regression model using search volume for the term “red snapper season” generates intraseasonal nowcasts that are 27% more accurate than the basic model without the internet search information and 29% more accurate than the prediction based on the previous year. Reliable nowcasts of intraseasonal harvest could make in-season (or in-year) management feasible and increase the likelihood of staying within quota. Our nowcasting approach using internet search volume might have the potential to improve quota management in other fisheries where conditions change year-to-year. PMID:26348645
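The nowcasting idea above is, at its core, a regression of harvest on predictors that are available in real time, such as search volume. A deliberately minimal one-predictor ordinary-least-squares sketch (the paper's actual model also includes seasonality, season-length, and management-event terms):

```python
def ols_fit(x, y):
    """Closed-form OLS for a single-predictor nowcast model:
        harvest_t ~ a + b * search_volume_t
    Returns the intercept a and slope b.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def nowcast(a, b, search_volume):
    """Predict current-period harvest from current search volume."""
    return a + b * search_volume
```

Fitting on historical (search volume, harvest) pairs and predicting from the current period's search index is what lets the estimate lead the official harvest statistics.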
Castner, Jessica; Yin, Yong; Loomis, Dianne; Hewner, Sharon
2016-07-01
The purpose of this study is to describe and explain the temporal and seasonal trends in ED utilization for a low-income population. A retrospective analysis of 66,487 ED Medicaid-insured health care claims in 2009 was conducted for two western New York counties using time-series analysis with autoregressive moving average (ARMA) models. The final ARMA(2,0) model indicated an autoregressive structure with up to a 2-day lag. ED volume is lower on weekends than on weekdays, and the highest volumes are on Mondays. Summer and fall seasons demonstrated higher volumes, whereas lower-volume outliers were associated with holidays. Day of the week was an influential predictor of ED utilization in low-income persons. Season and holidays are also predictors of ED utilization. These calendar-based patterns support the need for ongoing and future emergency leaders' collaborations in community-based care system redesign to meet the health care access needs of low-income persons. Copyright © 2016 Emergency Nurses Association. Published by Elsevier Inc. All rights reserved.
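An ARMA(2,0) model is a pure autoregression of order 2, which can be estimated from the series' autocorrelations via the Yule-Walker equations. An illustrative sketch (the study would have used a statistical package, and its model also included calendar covariates):

```python
def yule_walker_ar2(x):
    """Estimate AR(2) coefficients (phi1, phi2) by Yule-Walker.

    With r_k the lag-k autocorrelation, solve the 2x2 system
        r1 = phi1 + phi2 * r1
        r2 = phi1 * r1 + phi2
    which models x_t = phi1 * x_{t-1} + phi2 * x_{t-2} + noise,
    i.e. the up-to-2-day-lag structure of an ARMA(2,0) fit.
    """
    n = len(x)
    mean = sum(x) / n
    def acov(k):
        return sum((x[t] - mean) * (x[t + k] - mean)
                   for t in range(n - k)) / n
    c0, c1, c2 = acov(0), acov(1), acov(2)
    r1, r2 = c1 / c0, c2 / c0
    det = 1.0 - r1 * r1
    phi1 = (r1 - r1 * r2) / det
    phi2 = (r2 - r1 * r1) / det
    return phi1, phi2
```

A positive phi1 with a smaller phi2, for example, would mean today's ED volume is pulled toward yesterday's, with a weaker echo of the day before.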
Overcoming bias in estimating the volume-outcome relationship.
Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D
2006-02-01
To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.
Quantifying Standing Dead Tree Volume and Structural Loss with Voxelized Terrestrial Lidar Data
NASA Astrophysics Data System (ADS)
Popescu, S. C.; Putman, E.
2017-12-01
Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The overall objective of this study was to provide automated aboveground volume estimates of SDTs and automated procedures to detect, quantify, and characterize structural losses over time with terrestrial lidar data. The specific objectives of this study were: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned in dense forests; 2) develop an automated change detection methodology to accurately detect and quantify SDT structural loss between subsequent terrestrial lidar observations; and 3) characterize the structural loss rates of pine and oak SDTs in southeastern Texas. A voxel-based volume estimation algorithm, "TreeVolX", was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting the slice into distinct branch or stem sections then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models (RTMs). TreeVolX estimated large and small branch volume with an RMSE of 7.3% and 13.8%, respectively. A voxel-based change detection methodology was developed to accurately detect and quantify structural losses and incorporated several methods to mitigate the challenges presented by shifting tree and branch positions as SDT decay progresses. The volume and structural loss of 29 SDTs, composed of Pinus taeda and Quercus stellata, were successfully estimated using multitemporal terrestrial lidar observations over elapsed times ranging from 71 - 753 days. 
Pine and oak structural loss rates were characterized by estimating the amount of volumetric loss occurring in 20 equal-interval height bins of each SDT. Results showed that large pine snags exhibited more rapid structural loss in comparison to medium-sized oak snags in this study.
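At the heart of voxel-based volume estimation is binning lidar returns into a 3D grid and summing occupied cells. A bare-bones sketch of that core idea; the actual TreeVolX algorithm described above goes further, interpolating contours per horizontal slice and filling the stem interior to build solid reconstructed tree models:

```python
def voxelized_volume(points, voxel_size):
    """Occupied-voxel volume of a point cloud.

    points: iterable of (x, y, z) coordinates in meters.
    voxel_size: grid cell edge length in meters.
    Returns (#occupied voxels) * voxel_size^3; a lower bound on solid
    volume, since interior (unscanned) voxels are not filled here.
    """
    occupied = {tuple(int(c // voxel_size) for c in p) for p in points}
    return len(occupied) * voxel_size ** 3
```

Differencing the occupied-voxel sets of two co-registered scans of the same snag gives a simple basis for the change detection (structural loss) analysis.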
Deorientation of PolSAR coherency matrix for volume scattering retrieval
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Garg, R. D.; Kushwaha, S. P. S.
2016-05-01
Polarimetric SAR data has proven its potential to extract scattering information for different features appearing in a single resolution cell. Several decomposition modelling approaches have been developed to retrieve scattering information from PolSAR data. During scattering power decomposition based on physical scattering models, it becomes very difficult to distinguish volume scattering from randomly oriented vegetation from the scattering of oblique structures, which produce both double-bounce and volume scattering, because both are decomposed into the same scattering mechanism. The polarization orientation angle (POA) of an electromagnetic wave is one of its most important characteristics, and it is changed by scattering from the geometrical structure of topographic slopes, oriented urban areas, and randomly oriented features such as vegetation cover. The shift in POA affects the polarimetric radar signatures. Compensating for the polarization orientation shift is therefore essential for accurate estimation of the scattering nature of a feature. The prime objective of this work was to investigate the effect of the shift in POA on scattering information retrieval and to explore the effect of deorientation on the regression between field-estimated aboveground biomass (AGB) and volume scattering. Dudhwa National Park, U.P., India was selected as the study area, and fully polarimetric ALOS PALSAR data were used to retrieve scattering information from the forest area of the park. Field data on DBH and tree height were collected for AGB estimation using stratified random sampling. AGB was estimated for 170 plots at different locations in the forest area. The Yamaguchi four-component decomposition modelling approach was utilized to retrieve surface, double-bounce, helix, and volume scattering information. The shift in polarization orientation angle was estimated, and deorientation of the coherency matrix was performed to compensate for the POA shift.
The effect of deorientation on the RGB color composite of the forest area is readily apparent. Overestimation of volume scattering and underestimation of double-bounce scattering were recorded for PolSAR decomposition without deorientation, while an increase in double-bounce scattering and a decrease in volume scattering were observed after deorientation. This study was mainly focused on volume scattering retrieval and its relation to field-estimated AGB. The change in volume scattering after POA compensation of the PolSAR data was recorded, and volume scattering values were compared across all 170 forest plots for which field data had been collected. A decrease in volume scattering after deorientation was noted for all plots. Regression between PolSAR decomposition-based volume scattering and AGB was performed. Before deorientation, the coefficient of determination (R²) between volume scattering and AGB was 0.225. After deorientation, the coefficient of determination improved to 0.613. This study recommends deorientation of PolSAR data in decomposition modelling to retrieve reliable volume scattering information from forest areas.
Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.
Manjón, José V; Tohka, Jussi; Robles, Montserrat
2010-11-01
This paper addresses the problem of accurate voxel-level estimation of tissue proportions in the human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation. Copyright 2010 Elsevier Inc. All rights reserved.
Bashir, Mubasher A; Radke, Wolfgang
2012-02-17
The retention behavior of a range of statistical poly(styrene/ethylacrylate) copolymers is investigated in order to determine whether retention volumes of these copolymers can be predicted from a suitable chromatographic retention model. It was found that the eluent composition at which the copolymers elute in gradient chromatography is closely related to the eluent composition at which, in isocratic chromatography, the transition from elution in adsorption mode to exclusion mode occurs. For homopolymers this transition takes place at a critical eluent composition at which the molar mass dependence of elution volume vanishes; thus, similar critical eluent compositions can be defined for statistical copolymers. The existence of a critical eluent composition is further supported by the narrow peak widths, indicating that the broad molar mass distribution of the samples does not contribute to the retention volume. It is shown that the existing retention model for homopolymers allows correct quantitative predictions of retention volumes based on only three appropriate initial experiments: a gradient run and two isocratic experiments, one at the elution composition calculated from the first gradient run and a second at a slightly higher eluent strength. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
1985-01-01
Topics addressed include: assessment models; model predictions of ozone changes; ozone and temperature trends; trace gas effects on climate; kinetics and photochemical data base; spectroscopic data base (infrared to microwave); instrument intercomparisons and assessments; and monthly mean distributions of ozone and temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghomi, Pooyan Shirvani; Zinchenko, Yuriy
2014-08-15
Purpose: To compare methods of incorporating Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one tumor were involved in the treatment planning. The OARs and tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime, followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches in replicating the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds the promise of a substantial computational speed-up.
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He
2018-01-01
Purpose We estimated the volume of vestibular schwannomas with an ice cream cone formula using thin-slice magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Results Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were at least 0.992, indicating high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method and the ellipsoidal and Linskey's formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey's formulas, estimate vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
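The one-component formulas compared above have standard closed forms; note that the cuboidal estimate exceeds the ellipsoidal one by exactly 6/π ≈ 1.91, consistent with the reported ~1.9× overestimation. A sketch with invented diameters (the two-component and ice cream cone variants are not reproduced here):

```python
import math

def cuboidal(a, b, c):
    """One-component cuboidal estimate: V = a * b * c."""
    return a * b * c

def ellipsoidal(a, b, c):
    """One-component ellipsoidal estimate: V = (pi/6) * a * b * c."""
    return math.pi / 6.0 * a * b * c

def spherical(a, b, c):
    """One-component spherical estimate using the mean diameter."""
    d = (a + b + c) / 3.0
    return math.pi / 6.0 * d ** 3

# Invented orthogonal tumor diameters in cm (not patient data)
a, b, c = 2.0, 1.5, 1.8
print(round(cuboidal(a, b, c), 2), round(ellipsoidal(a, b, c), 2))
```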
Effect of Age-Related Human Lens Sutures Growth on Its Fluid Dynamics.
Wu, Ho-Ting D; Howse, Louisa A; Vaghefi, Ehsan
2017-12-01
Age-related nuclear cataract is the opacification of the clear ocular lens due to oxidative damage with age, and is the leading cause of blindness in the world. A lack of antioxidant supply to the core of the ever-growing ocular lens could contribute to this condition. In this project, a computational model was developed to study the sutural fluid inflow of the aging human lens. Three different SOLIDWORKS computational fluid dynamics models of the human lens (7, 28, and 46 years old) were created, based on available literature data. The fluid dynamics of the lens sutures were modelled using the Stokes flow equations, combined with realistic physiological boundary conditions, and embedded in COMSOL Multiphysics. The flow rate, volume, and flow rate per volume of fluid entering the aging lens were examined, and all increased over the 40 years modelled. However, while the volume of the lens grew by ∼300% and the flow rate increased by ∼400%, the flow rate per volume increased by only a moderate ∼38%. Here, sutural information from humans 7 to 46 years of age was obtained. In this modelled age range, an increase in flow rate per volume was observed, albeit at a very slow rate. We hypothesize that with further increasing age (60+ years), lens volume growth would outpace the flow rate increase, eventually leading to malnutrition of the lens nucleus and the onset of cataracts.
Airport and Airway Costs: Allocation and Recovery in the 1980’s.
1987-02-01
1997 [8]. Volume 4, FAA Cost Recovery Options [9]. Volume 5, Econometric Cost Functions for FAA Cost Allocation Model [10]. Volume 6, Users...and relative price elasticities (Ramsey pricing technique). User fees based on Ramsey pricing tend to be less burdensome on users and minimize...full discussion of the Ramsey pricing techniques is provided in Allocation of Federal Airport and Airway Costs for FY 1985 [6]. In step 5
TRANPLAN and GIS support for agencies in Alabama
DOT National Transportation Integrated Search
2001-08-06
Travel demand models are computerized programs intended to forecast future roadway traffic volumes for a community based on selected socioeconomic variables and travel behavior algorithms. Software to operate these travel demand models is currently a...
Pastore Carbone, Maria Giovanna; Musto, Pellegrino; Pannico, Marianna; Braeuer, Andreas; Scherillo, Giuseppe; Mensitieri, Giuseppe; Di Maio, Ernesto
2016-09-01
In the present study, a Raman line-imaging setup was employed to monitor in situ the CO2 sorption at elevated pressures (from 0.62 to 7.10 MPa) in molten PCL. The method allowed the quantitative measurement of gas concentration in both time-resolved and space-resolved modes. The combined experimental and theoretical approach allowed a molecular-level characterization of the system. The dissolved CO2 was found to occupy a volume essentially coincident with its van der Waals volume, and the estimated partial molar volume of the probe did not change with pressure. Lewis acid-Lewis base interactions with the PCL carbonyls were confirmed to be the main interaction mechanism. The geometry of the supramolecular complex and the preferential interaction site were controlled more by steric than by electronic effects. On the basis of the indications emerging from Raman spectroscopy, an equation-of-state thermodynamic model for the PCL-CO2 system, based upon a compressible lattice fluid theory endowed with specific interactions, has been tailored to account for the interaction types detected spectroscopically. The predictions of the thermodynamic model in terms of molar volume of the solution have been compared with available volumetric measurements, while predictions for CO2 partial molar volume have been compared with the values estimated on the basis of Raman spectroscopy.
Automatic liver segmentation from abdominal CT volumes using graph cuts and border marching.
Liao, Miao; Zhao, Yu-Qian; Liu, Xi-Yao; Zeng, Ye-Zhan; Zou, Bei-Ji; Wang, Xiao-Fang; Shih, Frank Y
2017-05-01
Identifying liver regions from abdominal computed tomography (CT) volumes is an important task for computer-aided liver disease diagnosis and surgical planning. This paper presents a fully automatic method for liver segmentation from CT volumes based on graph cuts and border marching. An initial slice is segmented by density peak clustering. Based on pixel- and patch-wise features, an intensity model and a PCA-based regional appearance model are developed to enhance the contrast between liver and background. Then, these models as well as the location constraint estimated iteratively are integrated into graph cuts in order to segment the liver in each slice automatically. Finally, a vessel compensation method based on the border marching is used to increase the segmentation accuracy. Experiments are conducted on a clinical data set we created and also on the MICCAI2007 Grand Challenge liver data. The results show that the proposed intensity, appearance models, and the location constraint are significantly effective for liver recognition, and the undersegmented vessels can be compensated by the border marching based method. The segmentation performances in terms of VOE, RVD, ASD, RMSD, and MSD as well as the average running time achieved by our method on the SLIVER07 public database are 5.8 ± 3.2%, -0.1 ± 4.1%, 1.0 ± 0.5mm, 2.0 ± 1.2mm, 21.2 ± 9.3mm, and 4.7 minutes, respectively, which are superior to those of existing methods. The proposed method does not require time-consuming training process and statistical model construction, and is capable of dealing with complicated shapes and intensity variations successfully. Copyright © 2017 Elsevier B.V. All rights reserved.
Image-based modeling of tumor shrinkage in head and neck radiation therapy1
Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei
2010-01-01
Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
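The second step, interpolating a dense correspondence from matched features, is commonly done with a thin plate spline; below is a minimal 2-D sketch assuming the classical kernel U(r) = r² log r and invented feature displacements (the paper's bidirectional SIFT matching is not reproduced here):

```python
import numpy as np

def _tps_kernel(d):
    """U(r) = r^2 * log(r), with U(0) defined as 0."""
    safe = np.where(d > 0, d, 1.0)
    return np.where(d > 0, d ** 2 * np.log(safe), 0.0)

def tps_fit(src, vals):
    """Solve for thin-plate-spline weights mapping 2-D control points
    `src` to scalar values `vals` (exact interpolation at the points)."""
    src = np.asarray(src, float)
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.concatenate([np.asarray(vals, float), np.zeros(3)])
    return np.linalg.solve(A, b)

def tps_eval(params, src, pts):
    """Evaluate the fitted spline at query points `pts`."""
    src, pts = np.asarray(src, float), np.asarray(pts, float)
    n = len(src)
    w, a = params[:n], params[n:]
    U = _tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2))
    return U @ w + a[0] + pts @ a[1:]

# Hypothetical matched feature locations and their x-displacements
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
dx = [0.0, 0.1, 0.0, 0.1, 0.05]
params = tps_fit(src, dx)
print(np.round(tps_eval(params, src, np.asarray(src)), 3))
```

In practice one spline is fitted per displacement component, and the fitted map is evaluated at every remaining voxel to obtain the voxel-to-voxel correspondence.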
A mathematical model to optimize the drain phase in gravity-based peritoneal dialysis systems.
Akonur, Alp; Lo, Ying-Cheng; Cizman, Borut
2010-01-01
Use of patient-specific drain-phase parameters has previously been suggested to improve peritoneal dialysis (PD) adequacy. Improving management of the drain period may also help to minimize intraperitoneal volume (IPV). A typical gravity-based drain profile consists of a relatively constant initial fast-flow period, followed by a transition period and a decaying slow-flow period. That profile was modeled using the equation V_D(t) = (V_D0 − Q_MAX × t) × φ + (V_D0 × e^(−αt)) × (1 − φ), where V_D(t) is the time-dependent dialysate volume; V_D0, the dialysate volume at the start of the drain; Q_MAX, the maximum drain flow rate; α, the exponential drain constant; and φ, the unit step function with respect to the flow transition. We simulated the effects of the assumed patient-specific maximum drain flow (Q_MAX) and transition volume (ψ), the peritoneal volume percentage at which the transition occurs, for fixed device-specific drain parameters. Average patient transport parameters were assumed during a 5-exchange therapy with 10 L of PD solution. Changes in therapy performance strongly depended on the drain parameters. Comparing 400 mL/85% with 200 mL/65% (Q_MAX/ψ), drain time (7.5 min vs. 13.5 min) and IPV (2769 mL vs. 2355 mL) increased when the initial drain flow was low and the transition quick. Ultrafiltration and solute clearances remained relatively similar. These differences were augmented up to a drain time of 22 minutes and an IPV of more than 3 L when Q_MAX was 100 mL/min. The ability to model individual drain conditions together with water and solute transport may help to prevent patient discomfort with gravity-based PD. However, it is essential to note that practical difficulties such as displaced catheters and obstructed flow paths cause variability in drain characteristics even for the same patient, limiting the clinical applicability of this model.
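The drain-profile equation above can be implemented directly; a sketch with illustrative (not fitted) parameter values:

```python
import math

def drain_volume(t, v_d0, q_max, alpha, t_transition):
    """Literal implementation of the drain profile
    V_D(t) = (V_D0 - Q_MAX*t)*phi + (V_D0*exp(-alpha*t))*(1 - phi),
    where phi = 1 before the flow transition and 0 after it."""
    phi = 1.0 if t < t_transition else 0.0
    return (v_d0 - q_max * t) * phi + v_d0 * math.exp(-alpha * t) * (1.0 - phi)

# Illustrative parameters only (not the paper's values):
# 2500 mL fill, 350 mL/min fast flow, alpha = 0.5/min, transition at 5 min
for t in (0.0, 2.0, 8.0):
    print(round(drain_volume(t, 2500.0, 350.0, 0.5, 5.0), 1))
```

The first two prints trace the constant fast-flow segment; the last one falls on the exponential slow-flow tail.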
Salient regions detection using convolutional neural networks and color volume
NASA Astrophysics Data System (ADS)
Liu, Guang-Hai; Hou, Yingkun
2018-03-01
Convolutional neural networks are an important technique in machine learning, pattern recognition, and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation, and intensity are utilized to extract depth cues, and then we integrate the depth cues and color volume into saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.
Evaldi, R.D.; Moore, B.L.
1994-01-01
Linear regression models are presented for estimating storm-runoff volumes, and mean concentrations and loads of selected constituents in storm runoff from urban watersheds of Jefferson County, Kentucky. Constituents modeled include dissolved oxygen, biochemical and chemical oxygen demand, total and suspended solids, volatile residue, nitrogen, phosphorus and phosphate, calcium, magnesium, barium, copper, iron, lead, and zinc. Model estimations are a function of drainage area, percentage of impervious area, climatological data, and land uses. Estimation models are based on runoff volumes, and concentrations and loads of constituents in runoff measured at 6 stormwater outfalls and 25 streams in Jefferson County.
Effects of crowders on the equilibrium and kinetic properties of protein aggregation
NASA Astrophysics Data System (ADS)
Bridstrup, John; Yuan, Jian-Min
2016-08-01
The equilibrium and kinetic properties of protein aggregation systems in the presence of crowders are investigated using simple, illuminating models based on mass-action laws. Our model yields analytic results for equilibrium properties of protein aggregates, which fit experimental data of actin and ApoC-II with crowders reasonably well. When the effects of crowders on rate constants are considered, our kinetic model is in good agreement with experimental results for actin with dextran as the crowder. Furthermore, the model shows that as crowder volume fraction increases, the length distribution of fibrils becomes narrower and shifts to shorter values due to volume exclusion.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-07-01
Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification, and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed as well as those incorporating spatial information. Discrimination was similar between models, but the RFC_standard had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d. = 0.09) and 3.9 (s.d. = 2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFC_standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
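The AUC used for internal validation above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case is ranked above a randomly chosen negative case. A self-contained sketch with invented risk scores (not the study's predictions):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    fraction of (positive, negative) pairs where the positive case
    gets the higher score; ties count half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Invented severe-mucositis labels and predicted risks, for illustration
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.3, 0.8, 0.6, 0.4, 0.2, 0.7, 0.65]
print(round(auc(labels, scores), 3))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is how the reported mean of 0.71 should be read.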
Elkhalil, Hossam; Akkin, Taner; Pearce, John; Bischof, John
2012-10-01
The photoselective vaporization of prostate (PVP) green light (532 nm) laser is increasingly being used as an alternative to the transurethral resection of prostate (TURP) for treatment of benign prostatic hyperplasia (BPH) in older patients and those who are poor surgical candidates. In order to achieve the goals of increased tissue removal volume (i.e., "ablation" in the engineering sense) and reduced collateral thermal damage during the PVP green light treatment, a two dimensional computational model for laser tissue ablation based on available parameters in the literature has been developed and compared to experiments. The model is based on the control volume finite difference and the enthalpy method with a mechanistically defined energy necessary to ablate (i.e., physically remove) a volume of tissue (i.e., energy of ablation E(ab)). The model was able to capture the general trends experimentally observed in terms of ablation and coagulation areas, their ratio (therapeutic index (TI)), and the ablation rate (AR) (mm(3)/s). The model and experiment were in good agreement at a smaller working distance (WD) (distance from the tissue in mm) and a larger scanning speed (SS) (laser scan speed in mm/s). However, the model and experiment deviated somewhat with a larger WD and a smaller SS; this is most likely due to optical shielding and heat diffusion in the laser scanning direction, which are neglected in the model. This model is a useful first step in the mechanistic prediction of PVP based BPH laser tissue ablation. Future modeling efforts should focus on optical shielding, heat diffusion in the laser scanning direction (i.e., including 3D effects), convective heat losses at the tissue boundary, and the dynamic optical, thermal, and coagulation properties of BPH tissue.
Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...
NASA Astrophysics Data System (ADS)
Evard, Margarita E.; Volkov, Aleksandr E.; Belyaev, Fedor S.; Ignatova, Anna D.
2018-05-01
The choice of the Gibbs potential for microstructural modeling of the FCC ↔ HCP martensitic transformation in FeMn-based shape memory alloys is discussed. The threefold symmetry of the HCP phase is taken into account when specifying the internal variables characterizing the volume fractions of martensite variants. Constraints imposed on the model constants by thermodynamic equilibrium conditions are formulated.
Lizarraga, Joy S.; Ockerman, Darwin J.
2011-01-01
The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Fort Worth District; the City of Corpus Christi; the Guadalupe-Blanco River Authority; the San Antonio River Authority; and the San Antonio Water System, configured, calibrated, and tested a watershed model for a study area consisting of about 5,490 mi2 of the Frio River watershed in south Texas. The purpose of the model is to contribute to the understanding of watershed processes and hydrologic conditions in the lower Frio River watershed. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge by using a numerical representation of physical characteristics of the landscape, and meteorological and streamflow data. Additional time-series inputs to the model include wastewater-treatment-plant discharges, surface-water withdrawals, and estimated groundwater inflow from Leona Springs. Model simulations of streamflow, ET, and groundwater recharge were done for various periods of record depending upon available measured data for input and comparison, starting as early as 1961. Because of the large size of the study area, the lower Frio River watershed was divided into 12 subwatersheds; separate Hydrological Simulation Program-FORTRAN models were developed for each subwatershed. Simulation of the overall study area involved running simulations in downstream order. Output from the model was summarized by subwatershed, point locations, reservoir reaches, and the Carrizo-Wilcox aquifer outcrop. Four long-term U.S. Geological Survey streamflow-gaging stations and two short-term streamflow-gaging stations were used for streamflow model calibration and testing with data from 1991-2008. Calibration was based on data from 2000-08, and testing was based on data from 1991-99. Choke Canyon Reservoir stage data from 1992-2008 and monthly evaporation estimates from 1999-2008 also were used for model calibration. Additionally, 2006-08 ET data from a U.S. 
Geological Survey meteorological station in Medina County were used for calibration. Streamflow and ET calibration were considered good or very good. For the 2000-08 calibration period, total simulated flow volume and the flow volume of the highest 10 percent of simulated daily flows were calibrated to within about 10 percent of measured volumes at six U.S. Geological Survey streamflow-gaging stations. The flow volume of the lowest 50 percent of daily flows was not simulated as accurately but represented a small percent of the total flow volume. The model-fit efficiency for the weekly mean streamflow during the calibration periods ranged from 0.60 to 0.91, and the root mean square error ranged from 16 to 271 percent of the mean flow rate. The simulated total flow volumes during the testing periods at the long-term gaging stations exceeded the measured total flow volumes by approximately 22 to 50 percent at three stations and were within 7 percent of the measured total flow volumes at one station. For the longer 1961-2008 simulation period at the long-term stations, simulated total flow volumes were within about 3 to 18 percent of measured total flow volumes. The calibrations made by using Choke Canyon reservoir volume for 1992-2008, reservoir evaporation for 1999-2008, and ET in Medina County for 2006-08, are considered very good. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to better quantify certain model inputs, and measurement errors. Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error. A sensitivity analysis was performed for the Upper San Miguel subwatershed model to show the effect of changes to model parameters on the estimated mean recharge, ET, and surface runoff from that part of the Carrizo-Wilcox aquifer outcrop. 
Simulated recharge was most sensitive to the changes in the lower-zone ET (LZ
Design Specifications for the Advanced Instructional Design Advisor (AIDA). Volume 1
1992-01-01
research; (3) Describe the knowledge base sufficient to support the varieties of knowledge to be represented in the AIDA model; (4) Document the...feasibility of continuing the development of the AIDA model. 2.3 Background In Phase I of the AIDA project (Task 0006), (1) the AIDA concept was defined...the AIDA Model A paper-based demonstration of the AIDA instructional design model was performed by using the model to develop a minimal application
NASA Astrophysics Data System (ADS)
Isaenkova, Margarita; Perlovich, Yuriy; Zhuk, Dmitry; Krymskaya, Olga
2017-10-01
The rolling of a zirconium tube is studied by means of crystal plasticity viscoplastic self-consistent (VPSC) constitutive modeling. The modeling is performed with a dislocation-based constitutive model and a spectral solver from the open-source simulation kit DAMASK. Multi-grain representative volume elements with periodic boundary conditions are used to predict the texture evolution and the distributions of strain and stress. Two models, one with random texture and one representing partially rolled material, are deformed to 30% reduction in tube wall thickness and 7% reduction in tube diameter. The resulting shapes of the models are shown and the strain distributions are plotted. The evolution of grain shape during deformation is also shown.
[Establishment of a 3D finite element model of human skull using MSCT images and mimics software].
Huang, Ping; Li, Zheng-dong; Shao, Yu; Zou, Dong-hua; Liu, Ning-guo; Li, Li; Chen, Yuan-yuan; Wan, Lei; Chen, Yi-jiu
2011-02-01
To establish a human 3D finite element skull model and to explore its value in biomechanical analysis. The cadaveric head was scanned, and a 3D skull model was created using Mimics software based on 2D axial CT images. The 3D skull model was optimized by the preprocessor, along with creation of the surface and volume meshes. The stress changes after the head was struck by an object, or after the head hit the ground directly, were analyzed using ANSYS software. The original 3D skull model showed a large number of poor-quality triangles and high similarity to the real head, while the optimized model showed high-quality surface and volume meshes with a comparatively small number of triangles. The model could show local and global stress changes effectively. A human 3D skull model can be established using MSCT and Mimics software, and it provides a good finite element model for biomechanical analysis. This model may also provide a basis for the study of head stress changes under different forces.
Miller, J; Fuller, M; Vinod, S; Suchowerska, N; Holloway, L
2009-06-01
A clinician's discrimination between radiation therapy treatment plans is traditionally a subjective process, based on experience and existing protocols. A more objective and quantitative approach to distinguishing between treatment plans is to use radiobiological or dosimetric objective functions based on radiobiological or dosimetric models. The efficacy of these models is not well understood, nor is the correlation between the plan rankings produced by the models and those from the traditional subjective approach. One such radiobiological model is the Normal Tissue Complication Probability (NTCP). Dosimetric models or indicators are more accepted in clinical practice. In this study, three radiobiological models, the Lyman NTCP, critical volume NTCP, and relative seriality NTCP, and three dosimetric models, the mean lung dose (MLD) and the lung volumes irradiated to at least 10 Gy (V10) and 20 Gy (V20), were used to rank a series of treatment plans using harm to normal (lung) tissue as the objective criterion. None of the models considered in this study showed consistent correlation with the radiation oncologists' plan ranking. If radiobiological or dosimetric models are to be used in objective functions for lung treatments, then based on this study it is recommended that the Lyman NTCP model be used, because it provides the most consistency with traditional clinician ranking.
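The Lyman NTCP model named above has a standard closed form, NTCP = Φ((gEUD − TD50)/(m·TD50)), where Φ is the standard normal CDF; a sketch using illustrative lung-type parameters (the TD50, m, and n values below are assumptions, not taken from this study):

```python
import math

def phi(t):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose for a DVH:
    gEUD = (sum_i v_i * D_i^(1/n))^n, with fractional volumes v_i."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lyman_ntcp(eud, td50, m):
    """Lyman NTCP: NTCP = Phi((EUD - TD50) / (m * TD50))."""
    return phi((eud - td50) / (m * td50))

# Assumed lung-type parameters and an invented dose-volume histogram
td50, m, n = 24.5, 0.18, 0.87
doses = [5.0, 15.0, 25.0, 40.0]   # Gy, bin doses
volumes = [0.4, 0.3, 0.2, 0.1]    # fractional organ volumes
eud = geud(doses, volumes, n)
print(round(eud, 2), round(lyman_ntcp(eud, td50, m), 3))
```

By construction, NTCP = 0.5 when the equivalent uniform dose equals TD50, and m controls how steeply the complication probability rises around that point.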
Ponomarev, Artem L; Costes, Sylvain V; Cucinotta, Francis A
2008-11-01
We computed the probabilities of having multiple double-strand breaks (DSBs), which are produced in DNA on a regional scale and not in close vicinity, in volumes matching the size of DNA damage foci, of a large chromatin loop, and of the physical volume of DNA containing the HPRT (human hypoxanthine phosphoribosyltransferase) locus. The model is based on a Monte Carlo description of DSB formation by heavy ions in the spatial context of the entire human genome contained within the cell nucleus, as well as at the gene sequence level. We showed that a finite physical volume corresponding to a visible DNA repair focus, believed to be associated with one DSB, can contain multiple DSBs due to heavy ion track structure and the supercoiled topography of DNA. A corrective distribution was introduced, which is the conditional probability of having excess DSBs in a focus volume, given that one is already present. The corrective distribution was calculated for 19.5 MeV/amu N ions, 3.77 MeV/amu alpha-particles, 1000 MeV/amu Fe ions, and X-rays. The corrected initial DSB yield was calculated from the experimental data on DNA repair foci. The DSB yield based on the corrective function converts the focus yield into a DSB yield comparable with that based on the earlier PFGE experiments. The distribution of DSBs within the physical limits of the HPRT gene was analyzed by a similar method as well. This corrective procedure shows the applicability of the model and gives the researcher a tool to better analyze focus statistics. The model enables researchers to analyze the DSB yield based on focus statistics in real experimental situations that lack one-to-one focus-to-DSB correspondence.
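The corrective-distribution idea, the probability of k DSBs in a focus given that at least one is present, can be illustrated under a simple Poisson assumption for DSB counts per focus volume; this is a simplification of mine, not the paper's Monte Carlo track-structure model:

```python
import math

def excess_prob(k, mu):
    """P(N = k | N >= 1) for Poisson-distributed DSB counts N
    with mean mu per focus volume."""
    pmf = mu ** k * math.exp(-mu) / math.factorial(k)
    return pmf / (1.0 - math.exp(-mu))

def mean_dsb_per_focus(mu):
    """E[N | N >= 1] = mu / (1 - exp(-mu)): expected DSBs per visible focus."""
    return mu / (1.0 - math.exp(-mu))

def corrected_dsb_yield(focus_yield, mu):
    """Convert a measured focus yield into an estimated DSB yield."""
    return focus_yield * mean_dsb_per_focus(mu)

mu = 0.3  # assumed mean DSB count per focus volume, illustrative only
print(round(mean_dsb_per_focus(mu), 3))
```

Since the conditional mean always exceeds 1, the corrected DSB yield is always at least the focus yield, matching the direction of the correction described above.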
Tumor Volume Estimation and Quasi-Continuous Administration for Most Effective Bevacizumab Therapy
Sápi, Johanna; Kovács, Levente; Drexler, Dániel András; Kocsis, Pál; Gajári, Dávid; Sápi, Zoltán
2015-01-01
Background: Bevacizumab is an exogenous inhibitor which inhibits the biological activity of human VEGF. Several studies have investigated the effectiveness of bevacizumab therapy for different cancer types, but there is currently an intense debate on its utility. We have investigated different methods to find the best tumor volume estimation, since this creates the possibility of precise and effective drug administration at a much lower dose than in the protocol. Materials and Methods: We examined C38 mouse colon adenocarcinoma and HT-29 human colorectal adenocarcinoma. In both cases, three groups were compared in the experiments. The first group did not receive therapy, the second group received one 200 μg bevacizumab dose for a treatment period (protocol-based therapy), and the third group received 1.1 μg bevacizumab every day (quasi-continuous therapy). Tumor volume measurement was performed by digital caliper and small animal MRI. The mathematical relationship between MRI-measured tumor volume and mass was investigated to estimate accurate tumor volume using caliper-measured data. A two-dimensional mathematical model was applied for tumor volume evaluation, and tumor- and therapy-specific constants were calculated for the three different groups. The effectiveness of bevacizumab administration was examined by statistical analysis. Results: In the case of C38 adenocarcinoma, protocol-based treatment did not result in significantly smaller tumor volume compared to the no-treatment group; however, there was a significant difference between untreated mice and mice who received quasi-continuous therapy (p = 0.002). In the case of HT-29 adenocarcinoma, daily treatment with one-twelfth of the total dose resulted in significantly smaller tumors than the protocol-based treatment (p = 0.038). When the tumor has a symmetrical, solid closed shape (typically without treatment), volume can be evaluated accurately from caliper-measured data with the applied two-dimensional mathematical model. Conclusion: Our results provide a theoretical background for a much more effective bevacizumab treatment using optimized administration. PMID:26540189
Operative Mortality After Arthroplasty for Femoral Neck Fracture and Hospital Volume.
Maceroli, Michael A; Nikkel, Lucas E; Mahmood, Bilal; Elfar, John C
2015-12-01
The purpose of the present study was to use a statewide, population-based data set to identify 30-day and 1-year postoperative mortality rates following total hip arthroplasty (THA) and hemiarthroplasty (HA) for displaced femoral neck fractures. The secondary aim was to determine whether arthroplasty volume confers a protective effect on the mortality rate following femoral neck fracture treatment. New York's Statewide Planning and Research Cooperative System was used to identify 45,749 patients older than 60 years of age with a discharge diagnosis of femoral neck fracture undergoing THA or HA from 2000 through 2010. Comorbidities were identified using the Charlson comorbidity index. Mortality risk was modeled using Cox proportional hazards models while controlling for demographic and comorbid characteristics. High-volume THA centers were defined as those in the top quartile of arthroplasty volume, and low-volume centers as those in the bottom quartile. Patients undergoing THA for femoral neck fracture rather than HA were younger (79 vs 83 years, P < .001), more likely to have rheumatoid disease, and less likely to have heart disease, dementia, cancer, or diabetes (all P < .05). Thirty-day mortality after HA was higher (8.4% vs 5.7%; P < .001), as was 1-year mortality (25.9% vs 17.8%; P < .001). After controlling for age, gender, ethnicity, and comorbidities, the risk of mortality following THA was 21% lower (hazard ratio [HR] 0.79; P = .003) at 30 days and 22% lower (HR 0.78; P < .001) at 1 year than following HA. Patients undergoing THA at high-volume arthroplasty centers had improved 1-year mortality compared to those undergoing THA at low-volume hospitals (HR 0.55; P = .008). Based on this large, population-based study, there is no basis to assume THA carries a greater mortality risk after hip fracture than does standard HA, even when accounting for institutional volume of hip arthroplasty.
Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.
Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D
2007-06-21
Accurate volume of interest (VOI) estimation in PET is crucial in oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely fuzzy hidden Markov chains (FHMC), with that of threshold-based techniques representing the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. The novelty of the fuzzy model, however, is the inclusion of an estimate of imprecision, which should lead to better modelling of the 'fuzzy' nature of object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination and activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than that of the threshold-based techniques.
The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of segmentation algorithms under evaluation is concerned.
NASA Astrophysics Data System (ADS)
Bisdas, Sotirios; Konstantinou, George N.; Sherng Lee, Puor; Thng, Choon Hua; Wagenblast, Jens; Baghi, Mehran; San Koh, Tong
2007-10-01
The objective of this work was to evaluate the feasibility of a two-compartment distributed-parameter (DP) tracer kinetic model for generating functional images of several physiologic parameters from dynamic contrast-enhanced CT data of patients with extracranial head and neck tumors, and to compare the DP functional images with those obtained by deconvolution-based DCE-CT data analysis. We performed post-processing of DCE-CT studies obtained from 15 patients with benign and malignant head and neck cancer. We introduced a DP model of the impulse residue function for a capillary-tissue exchange unit, which accounts for the processes of convective transport and capillary-tissue exchange. The calculated parametric maps represented blood flow (F), intravascular blood volume (v1), extravascular extracellular blood volume (v2), vascular transit time (t1), permeability-surface area product (PS), transfer ratios k12 and k21, and the fraction of extracted tracer (E). Based on the same region of interest (ROI) analysis, we calculated the tumor blood flow (BF), blood volume (BV) and mean transit time (MTT) using a modified deconvolution-based analysis that takes into account the extravasation of the contrast agent for PS imaging. We compared the corresponding values using Bland-Altman plot analysis. We outlined 73 ROIs including tumor sites, lymph nodes and normal tissue. The Bland-Altman plot analysis revealed that the two methods showed an acceptable degree of agreement for blood flow and can thus be used interchangeably for measuring this parameter. Slightly worse agreement was observed between v1 in the DP model and BV, but even here the two tracer kinetic analyses can be used interchangeably. Whether the two techniques can be used interchangeably remained questionable for t1 and MTT, as well as for measurements of the PS values.
The application of the proposed DP model is feasible in the clinical routine and it can be used interchangeably for measuring blood flow and vascular volume with the commercially available reference standard of the deconvolution-based approach. The lack of substantial agreement between the measurements of vascular transit time and permeability-surface area product may be attributed to the different tracer kinetic principles employed by both models and the detailed capillary tissue exchange physiological modeling of the DP technique.
Development of a hip joint model for finite volume simulations.
Cardiff, P; Karač, A; FitzPatrick, D; Ivanković, A
2014-01-01
This paper establishes a procedure for numerical analysis of a hip joint using the finite volume method. Patient-specific hip joint geometry is segmented directly from computed tomography and magnetic resonance imaging datasets, and the resulting bone surfaces are processed into a form suitable for volume meshing. A high-resolution continuum tetrahedral mesh has been generated, where a sandwich model approach is adopted; the bones are represented as stiffer cortical shells surrounding more flexible cancellous cores. Cartilage is included as a uniform-thickness extruded layer and the effect of layer thickness is investigated. To realistically position the bones, gait analysis has been performed, giving the 3D positions of the bones for the full gait cycle. Three phases of the gait cycle are examined using a custom finite-volume-based structural contact solver implemented in the open-source software OpenFOAM.
An efficient solid modeling system based on a hand-held 3D laser scan device
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
The hand-held 3D laser scanners sold on the market are appealing for their portability and convenience of use, but they are expensive. Developing such a system from cheap devices using the same principles as the commercial systems is impractical. In this paper, a simple hand-held 3D laser scanner is instead developed based on a volume reconstruction method using cheap devices. Unlike conventional laser scanners, which collect a point cloud of the object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The details of the design are presented and illustrated with the example of a complex-shaped object.
Transitioning From Volume to Value: A Strategic Approach to Design and Implementation.
Randazzo, Geralyn; Brown, Zenobia
2016-01-01
As the health care delivery system migrates toward a model based on value rather than volume, nursing leaders play a key role in assisting in the design and implementation of new models of care to support this transition. This article provides an overview of one organization's approach to evolve in the direction of value while gaining the experience needed to scope and scale cross-continuum assets to meet this growing demand. This article outlines the development and deployment of an organizational structure, information technology integration, clinical implementation strategies, and tools and metrics utilized to evaluate the outcomes of value-based programs. Experience in Bundled Payments for Care Improvement program is highlighted. The outcomes and lessons learned are incorporated for those interested in advancing value-based endeavors in their own organizations.
Second law of thermodynamics in volume diffusion hydrodynamics in multicomponent gas mixtures
NASA Astrophysics Data System (ADS)
Dadzie, S. Kokou
2012-10-01
We presented the thermodynamic structure of a new continuum flow model for multicomponent gas mixtures. The continuum model is based on a volume diffusion concept involving specific species. It is independent of the observer's reference frame and enables a straightforward tracking of a selected species within a mixture composed of a large number of constituents. A method to derive the second law and constitutive equations accompanying the model is presented. Using the configuration of a rotating fluid we illustrated an example of non-classical flow physics predicted by new contributions in the entropy and constitutive equations.
Hurricane Forecasting with the High-resolution NASA Finite-volume General Circulation Model
NASA Technical Reports Server (NTRS)
Atlas, R.; Reale, O.; Shen, B.-W.; Lin, S.-J.; Chern, J.-D.; Putman, W.; Lee, T.; Yeh, K.-S.; Bosilovich, M.; Radakovich, J.
2004-01-01
A high-resolution finite-volume General Circulation Model (fvGCM), resulting from a development effort of more than ten years, is now being run operationally at the NASA Goddard Space Flight Center and Ames Research Center. The model is based on a finite-volume dynamical core with terrain-following Lagrangian control-volume discretization and performs efficiently on massive parallel architectures. The computational efficiency allows simulations at a resolution of a quarter of a degree, which is double the resolution currently adopted by most global models in operational weather centers. Such fine global resolution brings us closer to overcoming a fundamental barrier in global atmospheric modeling for both weather and climate, because tropical cyclones and even tropical convective clusters can be more realistically represented. In this work, preliminary results of the fvGCM are shown. Fifteen simulations of four Atlantic tropical cyclones in 2002 and 2004 are chosen because of strong and varied difficulties presented to numerical weather forecasting. It is shown that the fvGCM, run at the resolution of a quarter of a degree, can produce very good forecasts of these tropical systems, adequately resolving problems like erratic track, abrupt recurvature, intense extratropical transition, multiple landfall and reintensification, and interaction among vortices.
NASA Astrophysics Data System (ADS)
Li, Hua; Wang, Xiaogui; Yan, Guoping; Lam, K. Y.; Cheng, Sixue; Zou, Tao; Zhuo, Renxi
2005-03-01
In this paper, a novel multiphysics mathematical model is developed for simulation of the swelling equilibrium of ionized temperature-sensitive hydrogels with volume phase transition, termed the multi-effect-coupling thermal-stimulus (MECtherm) model. This model consists of the steady-state Nernst-Planck equation, the Poisson equation and a swelling equilibrium governing equation based on Flory's mean field theory, in which two types of polymer-solvent interaction parameters, as functions of temperature and polymer-network volume fraction, are specified with or without consideration of the hydrogen bond interaction. In order to solve the MECtherm model, which consists of nonlinear partial differential equations, a meshless Hermite-Cloud method is used for numerical solution of the one-dimensional swelling equilibrium of thermal-stimulus responsive hydrogels immersed in a bathing solution. The computed results are in very good agreement with experimental data for the variation of volume swelling ratio with temperature. The influences of salt concentration and initial fixed-charge density on the variations of the volume swelling ratio of the hydrogels, the mobile ion concentrations, and the electric potential of both the interior hydrogel and the exterior bathing solution are discussed in detail.
Classification of SD-OCT volumes for DME detection: an anomaly detection approach
NASA Astrophysics Data System (ADS)
Sankar, S.; Sidibé, D.; Cheung, Y.; Wong, T. Y.; Lamoureux, E.; Milea, D.; Meriaudeau, F.
2016-03-01
Diabetic Macular Edema (DME) is the leading cause of blindness amongst diabetic patients worldwide. It is characterized by an accumulation of water molecules in the macula leading to swelling. Early detection of the disease helps prevent further loss of vision, so automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, a pipeline for detecting DME in OCT volumes is proposed in this paper. The method is based on anomaly detection using a Gaussian Mixture Model (GMM). It starts with pre-processing the B-scans by resizing, flattening and filtering them, and then extracting features from them. Both intensity and Local Binary Pattern (LBP) features are considered. The dimensionality of the extracted features is reduced using PCA. In the last stage, a GMM is fitted to features from normal volumes. During testing, features extracted from the test volume are evaluated against the fitted model for anomaly, and classification is made based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets, achieving a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, experiments show that the proposed method achieves better classification performance than other recently published works.
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2016-11-01
The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) at the Japan Agency for Marine-Earth Science and Technology.
Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.
1983-05-01
As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
Duarte-Carvajalino, Julio M.; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe
2013-01-01
Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by DW-MR datasets itself. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depends on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis. PMID:23596381
The Thermodynamic Limit in Mean Field Spin Glass Models
NASA Astrophysics Data System (ADS)
Guerra, Francesco; Toninelli, Fabio Lucio
We present a simple strategy in order to show the existence and uniqueness of the infinite volume limit of thermodynamic quantities, for a large class of mean field disordered models, as for example the Sherrington-Kirkpatrick model, and the Derrida p-spin model. The main argument is based on a smooth interpolation between a large system, made of N spin sites, and two similar but independent subsystems, made of N1 and N2 sites, respectively, with N1+N2=N. The quenched average of the free energy turns out to be subadditive with respect to the size of the system. This gives immediately convergence of the free energy per site, in the infinite volume limit. Moreover, a simple argument, based on concentration of measure, gives the almost sure convergence, with respect to the external noise. Similar results hold also for the ground state energy per site.
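The subadditivity argument summarized above fits in two lines. The LaTeX sketch below fixes a common sign convention for the quenched free energy (an assumption, since the abstract does not state one) and invokes Fekete's subadditivity lemma for the limit.

```latex
% With F_N = -\tfrac{1}{\beta}\,\mathbb{E}\log Z_N the quenched free energy
% of the N-site system, the interpolation argument gives subadditivity:
F_N \;\le\; F_{N_1} + F_{N_2}, \qquad N = N_1 + N_2 .
% Fekete's subadditivity lemma then yields convergence of the free energy
% per site in the infinite volume limit:
\lim_{N\to\infty} \frac{F_N}{N} \;=\; \inf_{N \ge 1} \frac{F_N}{N} .
```

The concentration-of-measure step then upgrades this convergence of the quenched average to almost sure convergence with respect to the disorder.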
Finite volume model for two-dimensional shallow environmental flow
Simoes, F.J.M.
2011-01-01
This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
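As a toy illustration of the upwind finite volume technique used in the smooth regions of the flow, here is a Python sketch of a first-order upwind step for 1D linear advection with periodic boundaries. It is a scalar stand-in under stated CFL assumptions, not the paper's shallow water solver.

```python
import numpy as np

def upwind_step(q, u, dx, dt):
    """One explicit first-order upwind step for q_t + u q_x = 0.

    Assumes u > 0 and periodic boundaries; the flux into each cell
    comes from its left (upwind) neighbour."""
    flux = u * q
    return q - (dt / dx) * (flux - np.roll(flux, 1))

nx, u = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / u          # CFL number 0.5, within the stability limit
x = np.linspace(0.0, 1.0, nx, endpoint=False)
q = np.exp(-200.0 * (x - 0.3) ** 2)   # smooth initial bump
total0 = q.sum()
for _ in range(50):
    q = upwind_step(q, u, dx, dt)
```

Because the flux differences telescope over a periodic grid, the scheme conserves the cell-average sum exactly, which is the defining property a finite volume discretization must keep when switching between the Riemann solver and this cheaper update.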
Fibre inflation and α-attractors
NASA Astrophysics Data System (ADS)
Kallosh, Renata; Linde, Andrei; Roest, Diederik; Westphal, Alexander; Yamada, Yusuke
2018-02-01
Fibre inflation is a specific string theory construction based on the Large Volume Scenario that produces an inflationary plateau. We outline its relation to α-attractor models for inflation, with the cosmological sector originating from certain string theory corrections leading to α = 2 and α = 1/2. Above a certain field range, the steepening effect of higher-order corrections leads first to the breakdown of single-field slow-roll and after that to the onset of 2-field dynamics: the overall volume of the extra dimensions starts to participate in the effective dynamics. Finally, we propose effective supergravity models of fibre inflation based on an anti-D3-brane uplift term with a nilpotent superfield. Specific moduli-dependent geometries induced by the anti-D3-brane lead to cosmological fibre models that in addition have a de Sitter minimum exit. These supergravity models motivated by fibre inflation are relatively simple, stabilize the axions and disentangle the Hubble parameter from supersymmetry breaking.
Gas hydrate volume estimations on the South Shetland continental margin, Antarctic Peninsula
Jin, Y.K.; Lee, M.W.; Kim, Y.; Nam, S.H.; Kim, K.J.
2003-01-01
Multi-channel seismic data acquired on the South Shetland margin, northern Antarctic Peninsula, show that Bottom Simulating Reflectors (BSRs) are widespread in the area, implying large volumes of gas hydrates. In order to estimate the volume of gas hydrate in the area, interval velocities were determined using a 1-D velocity inversion method and porosities were deduced from their relationship with sub-bottom depth for terrigenous sediments. Because data such as well logs are not available, we made two baseline models for the velocities and porosities of non-gas hydrate-bearing sediments in the area, considering the velocity jump observed at shallow sub-bottom depth due to the joint contributions of gas hydrate and a shallow unconformity. The difference between the results of the two models is not significant. The parameters used to estimate the total volume of gas hydrate in the study area were 145 km of total length of BSRs identified on seismic profiles, 350 m thickness and 15 km width of gas hydrate-bearing sediments, and 6.3% average volume concentration of gas hydrate (based on the second baseline model). Assuming that gas hydrates exist only where BSRs are observed, the total volume of gas hydrates along the seismic profiles in the area is about 4.8 × 10^10 m^3 (7.7 × 10^12 m^3 of methane at standard temperature and pressure).
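The quoted totals can be checked directly from the stated parameters. In the sketch below, the hydrate-to-methane expansion factor of ~160 at STP is our assumption (the abstract does not state the factor it used); the other numbers are taken from the abstract.

```python
# Back-of-the-envelope check of the abstract's totals (lengths in metres)
length = 145e3         # total length of BSRs along seismic profiles
width = 15e3           # width of gas hydrate-bearing sediments
thickness = 350.0      # thickness of gas hydrate-bearing sediments
concentration = 0.063  # average volume concentration of gas hydrate

hydrate_volume = length * width * thickness * concentration
# Roughly 4.8e10 m^3, matching the abstract's figure.

# Methane yield at STP, assuming the commonly used ~160:1 expansion
# factor (an assumption; not stated in the abstract)
methane_volume = hydrate_volume * 160.0
# Roughly 7.7e12 m^3, consistent with the quoted value.
```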
He, Guoxi; Liang, Yongtu; Li, Yansong; Wu, Mengyu; Sun, Liying; Xie, Cheng; Li, Feng
2017-06-15
The accidental leakage of long-distance pressurized oil pipelines is a major source of risk, capable of causing extensive damage to human health and the environment. However, the complexity of the leaking process, with its complex boundary conditions, makes the leakage volume difficult to calculate. In this study, the leaking process is divided into 4 stages based on the strength of the transient pressure, and 3 models are established to calculate the leaking flowrate and volume. First, a negative pressure wave propagation attenuation model is applied to calculate the sizes of orifices. Second, a transient oil leaking model, consisting of continuity, momentum conservation, energy conservation and orifice flow equations, is built to calculate the leakage volume. Third, a steady-state oil leaking model is employed to calculate the leakage after valves and pumps shut down. Moreover, the sensitive factors that affect the leak coefficient of the orifices and the leakage volume are analyzed to determine the most influential one. To validate the numerical simulation, two types of leakage test with different sizes of leakage holes were conducted on Sinopec product pipelines. Further validation was carried out with commercial software to supplement the experiments. The leaking process under different leaking conditions is thus described and analyzed. Copyright © 2017 Elsevier B.V. All rights reserved.
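Of the equations listed in the abstract, the orifice flow equation is the simplest to illustrate. The Python sketch below uses its standard form, Q = Cd · A · sqrt(2ΔP/ρ); the parameter values in the usage line are illustrative assumptions, not values from the study.

```python
import math

def orifice_leak_flowrate(cd, area, p_in, p_out, rho):
    """Volumetric leak flowrate through an orifice: Q = Cd*A*sqrt(2*dP/rho).

    cd    -- discharge (leak) coefficient of the orifice
    area  -- orifice area, m^2
    p_in  -- internal pipeline pressure, Pa
    p_out -- ambient pressure outside the leak, Pa
    rho   -- oil density, kg/m^3
    Returns flowrate in m^3/s.
    """
    dp = p_in - p_out
    return cd * area * math.sqrt(2.0 * dp / rho)

# Illustrative values only: 10 mm hole, 2 MPa line pressure, light oil
q = orifice_leak_flowrate(0.62, math.pi * 0.005 ** 2, 2.0e6, 1.0e5, 850.0)
```

Integrating this flowrate over the transient pressure history, stage by stage, is what the study's transient and steady-state models do to obtain the total leakage volume.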
SU-E-J-136: Evaluation of a Non-Invasive Method on Lung Tumor Tracking.
Zhao, T; White, B; Low, D
2012-06-01
To develop a non-invasive method to track lung motion in free-breathing patients. A free-breathing lung motion model has been developed that uses tidal volume and air flow rate as surrogates for lung trajectories. In this study, 4D CT data sets were acquired during simulation and were reconstructed into 10 phases. Total lung capacities were calculated from the reconstructed images. Continuous signals from the abdominal pneumatic belt were correlated to the volumes and were thereby converted into a curve of tidal volumes. Air flow rate was calculated as the first-order derivative of the tidal volume curve. Lung trajectories in the 10 reconstructed images were obtained using B-Spline registration. Parameters of the free-breathing lung motion model were fit from the tidal volumes, airflow rates and lung trajectories using the simulation data. Patients were rescanned every week during the treatment. Predictions of lung trajectories from the model were generated and compared to the actual positions in BEV. Trajectories of the lung were predicted with a residual error of 1.49 mm at the 95th percentile of all tracked points. Tracking was stable and reproducible over two weeks. Non-invasive tumor tracking based on a free-breathing lung motion model is feasible and stable over weeks. © 2012 American Association of Physicists in Medicine.
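The conversion from a tidal volume curve to the air flow rate surrogate is a plain first-order time derivative. A sketch using a synthetic sinusoidal breathing trace as a stand-in for the belt-derived tidal volume curve (illustrative values, not patient data):

```python
import numpy as np

# Synthetic tidal-volume trace (sinusoidal breathing, 4 s period, 0.5 L
# swing), standing in for the belt-derived tidal volume curve.
t = np.linspace(0.0, 8.0, 801)                           # time [s]
tidal_volume = 0.25 * (1 - np.cos(2 * np.pi * t / 4.0))  # litres

# Air flow rate as the first-order derivative of the tidal volume curve.
flow_rate = np.gradient(tidal_volume, t)  # L/s

# Peak flow for a 0.5 L swing over a 4 s period is pi * 0.5 / 4 ~ 0.39 L/s.
print(round(flow_rate.max(), 2))
```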
2016-08-23
Hybrid finite element / finite volume based CaMEL shallow water flow solvers have been successfully extended to study wave ... effects on ice floes in a simplified 10 sq-km ocean domain. Our solver combines the merits of both the finite element and finite volume methods and ... Sponsoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Keywords: sea ice dynamics, shallow water, finite element, finite volume.
Tanaka, Yutaka; Saito, Shigeru; Sasuga, Saeko; Takahashi, Azuma; Aoyama, Yusuke; Obama, Kazuto; Umezu, Mitsuo; Iwasaki, Kiyotaka
2018-05-01
Quantitative assessment of post-transcatheter aortic valve replacement (TAVR) aortic regurgitation (AR) remains challenging. We developed patient-specific anatomical models with pulsatile flow circuit and investigated factors associated with AR after TAVR. Based on pre-procedural computed tomography (CT) data of the six patients who underwent transfemoral TAVR using a 23-mm SAPIEN XT, anatomically and mechanically equivalent aortic valve models were developed. Forward flow and heart rate of each patient in two days after TAVR were duplicated under mean aortic pressure of 80mmHg. Paravalvular leakage (PVL) volume in basal and additional conditions was measured for each model using an electromagnetic flow sensor. Incompletely apposed tract between the transcatheter and aortic valves was examined using a micro-CT. PVL volume in each patient-specific model was consistent with each patient's PVL grade, and was affected by hemodynamic conditions. PVL and total regurgitation volume increased with the mean aortic pressure, whereas closing volume did not change. In contrast, closing volume increased proportionately with heart rate, but PVL did not change. The minimal cross-sectional gap had a positive correlation with the PVL volumes (r=0.89, P=0.02). The gap areas typically occurred in the vicinity of the bulky calcified nodules under the native commissure. PVL volume, which could be affected by hemodynamic conditions, was significantly associated with the minimal cross-sectional gap area between the aortic annulus and the stent frame. These data may improve our understanding of the mechanism of the occurrence of post-TAVR PVL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry
NASA Technical Reports Server (NTRS)
Davis, Curt H.
1992-01-01
An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
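The two-stage retracking structure (initial parameter estimates read off the filtered waveform, then iterative least-squares refinement against the model) can be sketched as follows. The combined surface/volume scattering model itself is not reproduced here; a generic error-function leading edge with an exponential tail stands in for it, and all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Toy waveform model: an error-function leading edge with an exponential
# trailing-edge decay -- a stand-in for the combined surface/volume
# scattering model, which is not reproduced here.
def waveform(t, amp, t0, sigma, decay):
    ramp = 0.5 * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma)))
    tail = np.exp(-decay * np.clip(t - t0, 0.0, None))
    return amp * ramp * tail

t = np.linspace(0.0, 60.0, 120)          # gate index (arbitrary units)
true_params = (1.0, 25.0, 2.0, 0.02)
rng = np.random.default_rng(0)
y = waveform(t, *true_params) + 0.01 * rng.standard_normal(t.size)

# Stage 1: crude initial estimates from the (filtered) waveform itself.
p0 = (y.max(), t[np.argmax(y > 0.5 * y.max())], 3.0, 0.01)
# Stage 2: iterative least-squares refinement against the model.
popt, _ = curve_fit(waveform, t, y, p0=p0)
print(popt)  # retracked parameters, close to (1.0, 25.0, 2.0, 0.02)
```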
38th JANNAF Combustion Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Fry, Ronald S. (Editor); Eggleston, Debra S. (Editor); Gannaway, Mary T. (Editor)
2002-01-01
This volume, the first of two volumes, is a collection of 55 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 38th Combustion Subcommittee (CS), 26th Airbreathing Propulsion Subcommittee (APS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 21st Modeling and Simulation Subcommittee meeting. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics cover five major technology areas, including: 1) Combustion - Propellant Combustion, Ingredient Kinetics, Metal Combustion, Decomposition Processes and Material Characterization, Rocket Motor Combustion, and Liquid & Hybrid Combustion; 2) Liquid Rocket Engines - Low Cost Hydrocarbon Liquid Rocket Engines, Liquid Propulsion Turbines, Liquid Propulsion Pumps, and Staged Combustion Injector Technology; 3) Modeling & Simulation - Development of Multi-Disciplinary RBCC Modeling, Gun Modeling, and Computational Modeling for Liquid Propellant Combustion; 4) Guns - Gun Propelling Charge Design and ETC Gun Propulsion; and 5) Airbreathing - Scramjet and Ramjet S&T Program Overviews.
NASA Astrophysics Data System (ADS)
Subasic, E.; Huang, C.; Jakumeit, J.; Hediger, F.
2015-06-01
The ongoing increase in the size and capacity of state-of-the-art wind power plants is highlighting the need to reduce the weight of critical components, such as hubs, main shaft bearing housings, gear box housings and support bases. These components are manufactured as nodular iron castings (spheroid graphite iron, or SGI). A weight reduction of up to 20% is achievable by optimizing the geometry to minimize volume, thus enabling significant downsizing of wind power plants. One method for enhancing quality control in the production of thick-walled SGI castings, and thus reducing tolerances and, consequently, enabling castings of smaller volume is via a casting simulation of mould filling and solidification based on a combination of microscopic model and VoF-multiphase approach. Coupled fluid flow with heat transport and phase transformation kinetics during solidification is described by partial differential equations and solved using the finite volume method. The flow of multiple phases is described using a volume of fluid approach. Mass conservation equations are solved separately for both liquid and solid phases. At the micro-level, the diffusion-controlled growth model for grey iron eutectic grains by Wetterfall et al. is combined with a growth model for white iron eutectic grains. The micro-solidification model is coupled with macro-transport equations via source terms in the energy and continuity equations. As a first step the methodology was applied to a simple geometry to investigate the impact of mould-filling on the grey-to-white transition prediction in nodular cast iron.
2011-11-17
Mr. Frank Salvatore, High Performance Technologies FIXED AND ROTARY WING AIRCRAFT 13274 - “CREATE-AV DaVinci : Model-Based Engineering for Systems... Tools for Reliability Improvement and Addressing Modularity Issues in Evaluation and Physical Testing”, Dr. Richard Heine, Army Materiel Systems
Imputation and Model-Based Updating Techniques for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat
2015-01-01
Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
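A sketch of the workflow on synthetic data (not the paper's dataset): a Bagging decision-tree regressor fit to descriptor-like features, scored with the mean fold error quoted in the abstract, i.e. 10 raised to the mean absolute log10 ratio of predicted to observed Vss. Feature meanings and coefficients below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-ins for molecular descriptors / predicted Kt:p values;
# the target is log10(Vss). This is NOT the paper's dataset.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 5))
log_vss = X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=400)

# Bagging ensemble of regression trees, as in the abstract's best models.
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
model.fit(X[:300], log_vss[:300])
pred = model.predict(X[300:])

# Mean fold error: 10 ** mean(|log10(predicted Vss / observed Vss)|).
mfe = 10 ** np.mean(np.abs(pred - log_vss[300:]))
print(f"mean fold error = {mfe:.2f}")
```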
Community-LINE Source Model (C-LINE) to estimate roadway emissions
C-LINE is a web-based model that estimates emissions and dispersion of toxic air pollutants for roadways in the U.S. This reduced-form air quality model examines what-if scenarios for changes in emissions, such as traffic volume, fleet mix, and vehicle speed.
Effects of Pore Distributions on Ductility of Thin-Walled High Pressure Die-Cast Magnesium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Kyoo Sil; Li, Dongsheng; Sun, Xin
2013-06-01
In this paper, a microstructure-based three-dimensional (3D) finite element modeling method is adopted to investigate the effects of porosity in thin-walled high pressure die-cast (HPDC) Magnesium alloys on their ductility. For this purpose, the cross-sections of AM60 casting samples are first examined using optical microscopy and X-ray tomography to obtain general information on the pore distribution features. The experimentally observed pore distribution features are then used to generate a series of synthetic microstructure-based 3D finite element models with different pore volume fractions and pore distribution features. Shear and ductile damage models are adopted in the finite element analyses to induce fracture by element removal, leading to the prediction of ductility. The results in this study show that the ductility monotonically decreases as the pore volume fraction increases and that the effect of the 'skin region' on the ductility is noticeable under the condition of the same local pore volume fraction in the center region of the sample, and its existence can be beneficial for the improvement of ductility. Further synthetic microstructure-based 3D finite element analyses are planned to investigate the effects of pore size and pore size distribution.
NASA Astrophysics Data System (ADS)
Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas
2017-03-01
Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.
Stochastic 3D modeling of Ostwald ripening at ultra-high volume fractions of the coarsening phase
NASA Astrophysics Data System (ADS)
Spettl, A.; Wimmer, R.; Werz, T.; Heinze, M.; Odenbach, S.; Krill, C. E., III; Schmidt, V.
2015-09-01
We present a (dynamic) stochastic simulation model for 3D grain morphologies undergoing a grain coarsening phenomenon known as Ostwald ripening. For low volume fractions of the coarsening phase, the classical LSW theory predicts a power-law evolution of the mean particle size and convergence toward self-similarity of the particle size distribution; experiments suggest that this behavior holds also for high volume fractions. In the present work, we have analyzed 3D images that were recorded in situ over time in semisolid Al-Cu alloys manifesting ultra-high volume fractions of the coarsening (solid) phase. Using this information we developed a stochastic simulation model for the 3D morphology of the coarsening grains at arbitrary time steps. Our stochastic model is based on random Laguerre tessellations and is by definition self-similar—i.e. it depends only on the mean particle diameter, which in turn can be estimated at each point in time. For a given mean diameter, the stochastic model requires only three additional scalar parameters, which influence the distribution of particle sizes and their shapes. An evaluation shows that even with this minimal information the stochastic model yields an excellent representation of the statistical properties of the experimental data.
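The LSW power-law evolution of the mean particle size referenced above has a compact form: the cube of the mean radius grows linearly in time, r(t)^3 = r0^3 + K·t. A sketch with illustrative values for r0 and the rate constant K (assumed, not the measured Al-Cu data):

```python
# LSW-type coarsening: the cube of the mean particle radius grows
# linearly in time. The initial radius and rate constant below are
# illustrative placeholders, not fitted experimental values.
def mean_radius(t, r0=5.0, K=2.0):
    """Mean particle radius [um] after coarsening time t [min]."""
    return (r0**3 + K * t) ** (1.0 / 3.0)

# The mean diameter at each time step is all the stochastic model needs.
for t in (0, 60, 240, 960):
    print(t, round(mean_radius(t), 2))
```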
A Multivariate Twin Study of Hippocampal Volume, Self-Esteem and Well-Being in Middle Aged Men
Kubarych, Thomas S.; Prom-Wormley, Elizabeth C.; Franz, Carol E.; Panizzon, Matthew S.; Dale, Anders M.; Fischl, Bruce; Eyler, Lisa T.; Fennema-Notestine, Christine; Grant, Michael D.; Hauger, Richard L.; Hellhammer, Dirk H.; Jak, Amy J.; Jernigan, Terry L.; Lupien, Sonia J.; Lyons, Michael J.; Mendoza, Sally P.; Neale, Michael C.; Seidman, Larry J.; Tsuang, Ming T.; Kremen, William S.
2012-01-01
Self-esteem and well-being are important for successful aging, and some evidence suggests that self-esteem and well-being are associated with hippocampal volume, cognition, and stress responsivity. Whereas most of this evidence is based on studies of older adults, we investigated self-esteem, well-being and hippocampal volume in 474 male middle-age twins. Self-esteem was significantly positively correlated with hippocampal volume (.09, p=.03 for left hippocampus, .10, p=.04 for right). Correlations for well-being were not significant (ps > .05). There were strong phenotypic correlations between self-esteem and well-being (.72, p<.001) and between left and right hippocampal volume (.72, p<.001). In multivariate genetic analyses, a 2-factor AE model with well-being and self-esteem on one factor and left and right hippocampal volumes on the other factor fit the data better than Cholesky, independent pathway or common pathway models. The correlation between the two genetic factors was .12 (p=.03); the correlation between the environmental factors was .09 (p > .05). Our results indicate that largely different genetic and environmental factors underlie self-esteem and well-being on the one hand and hippocampal volume on the other. PMID:22471516
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, Mark W., E-mail: markmcdonaldmd@gmail.com; Indiana University Health Proton Therapy Center, Bloomington, Indiana; Linton, Okechukwu R.
Purpose: We evaluated patient and treatment parameters correlated with development of temporal lobe radiation necrosis. Methods and Materials: This was a retrospective analysis of a cohort of 66 patients treated for skull base chordoma, chondrosarcoma, adenoid cystic carcinoma, or sinonasal malignancies between 2005 and 2012, who had at least 6 months of clinical and radiographic follow-up. The median radiation dose was 75.6 Gy (relative biological effectiveness [RBE]). Analyzed factors included gender, age, hypertension, diabetes, smoking status, use of chemotherapy, and the absolute dose:volume data for both the right and left temporal lobes, considered separately. A generalized estimating equation (GEE) regression analysis evaluated potential predictors of radiation necrosis, and the median effective concentration (EC50) model estimated dose–volume parameters associated with radiation necrosis. Results: Median follow-up time was 31 months (range 6-96 months) and was 34 months in patients who were alive. The Kaplan-Meier estimate of overall survival at 3 years was 84.9%. The 3-year estimate of any grade temporal lobe radiation necrosis was 12.4%, and for grade 2 or higher radiation necrosis was 5.7%. On multivariate GEE, only dose–volume relationships were associated with the risk of radiation necrosis. In the EC50 model, all dose levels from 10 to 70 Gy (RBE) were highly correlated with radiation necrosis, with a 15% 3-year risk of any-grade temporal lobe radiation necrosis when the absolute volume of a temporal lobe receiving 60 Gy (RBE) (aV60) exceeded 5.5 cm^3, or aV70 > 1.7 cm^3. Conclusions: Dose–volume parameters are highly correlated with the risk of developing temporal lobe radiation necrosis. In this study the risk of radiation necrosis increased sharply when the temporal lobe aV60 exceeded 5.5 cm^3 or aV70 > 1.7 cm^3. Treatment planning goals should include constraints on the volume of temporal lobes receiving higher dose. The EC50 model provides suggested dose–volume temporal lobe constraints for conventionally fractionated high-dose skull base radiation therapy.
Raffield, Laura M; Cox, Amanda J; Criqui, Michael H; Hsu, Fang-Chi; Terry, James G; Xu, Jianzhao; Freedman, Barry I; Carr, J Jeffrey; Bowden, Donald W
2018-05-11
Coronary artery calcified plaque (CAC) is strongly predictive of cardiovascular disease (CVD) events and mortality, both in general populations and individuals with type 2 diabetes at high risk for CVD. CAC is typically reported as an Agatston score, which is weighted for increased plaque density. However, the role of CAC density in CVD risk prediction, independently and with CAC volume, remains unclear. We examined the role of CAC density in individuals with type 2 diabetes from the family-based Diabetes Heart Study and the African American-Diabetes Heart Study. CAC density was calculated as mass divided by volume, and associations with incident all-cause and CVD mortality [median follow-up 10.2 years European Americans (n = 902, n = 286 deceased), 5.2 years African Americans (n = 552, n = 93 deceased)] were examined using Cox proportional hazards models, independently and in models adjusted for CAC volume. In European Americans, CAC density, like Agatston score and volume, was consistently associated with increased risk of all-cause and CVD mortality (p ≤ 0.002) in models adjusted for age, sex, statin use, total cholesterol, HDL, systolic blood pressure, high blood pressure medication use, and current smoking. However, these associations were no longer significant when models were additionally adjusted for CAC volume. CAC density was not significantly associated with mortality, either alone or adjusted for CAC volume, in African Americans. CAC density is not associated with mortality independent from CAC volume in European Americans and African Americans with type 2 diabetes.
Geometric confinement influences cellular mechanical properties I -- adhesion area dependence.
Su, Judith; Jiang, Xingyu; Welsch, Roy; Whitesides, George M; So, Peter T C
2007-06-01
Interactions between the cell and the extracellular matrix regulate a variety of cellular properties and functions, including cellular rheology. In the present study of cellular adhesion, area was controlled by confining NIH 3T3 fibroblast cells to circular micropatterned islands of defined size. The shear moduli of cells adhering to islands of well defined geometry, as measured by magnetic microrheometry, was found to have a significantly lower variance than those of cells allowed to spread on unpatterned surfaces. We observe that the area of cellular adhesion influences shear modulus. Rheological measurements further indicate that cellular shear modulus is a biphasic function of cellular adhesion area with stiffness decreasing to a minimum value for intermediate areas of adhesion, and then increasing for cells on larger patterns. We propose a simple hypothesis: that the area of adhesion affects cellular rheological properties by regulating the structure of the actin cytoskeleton. To test this hypothesis, we quantified the volume fraction of polymerized actin in the cytosol by staining with fluorescent phalloidin and imaging using quantitative 3D microscopy. The polymerized actin volume fraction exhibited a similar biphasic dependence on adhesion area. Within the limits of our simplifying hypothesis, our experimental results permit an evaluation of the ability of established, micromechanical models to predict the cellular shear modulus based on polymerized actin volume fraction. We investigated the "tensegrity", "cellular-solids", and "biopolymer physics" models that have, respectively, a linear, quadratic, and 5/2 dependence on polymerized actin volume fraction. All three models predict that a biphasic trend in polymerized actin volume fraction as a function of adhesion area will result in a biphasic behavior in shear modulus. Our data favors a higher-order dependence on polymerized actin volume fraction. 
Increasingly better experimental agreement is observed for the tensegrity, the cellular solids, and the biopolymer models respectively. Alternatively if we postulate the existence of a critical actin volume fraction below which the shear modulus vanishes, the experimental data can be equivalently described by a model with an almost linear dependence on polymerized actin volume fraction; this observation supports a tensegrity model with a critical actin volume fraction.
Modelling compressible dense and dilute two-phase flows
NASA Astrophysics Data System (ADS)
Saurel, Richard; Chinnayya, Ashwin; Carmouze, Quentin
2017-06-01
Many two-phase flow situations, from engineering science to astrophysics, deal with transition from dense (high concentration of the condensed phase) to dilute concentration (low concentration of the same phase), covering the entire range of volume fractions. Some models are now well accepted at the two limits, but none are able to cover accurately the entire range, in particular regarding wave propagation. In the present work, an alternative to the Baer and Nunziato (BN) model [Baer, M. R. and Nunziato, J. W., "A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861 (1986)], initially designed for dense flows, is built. The corresponding model is hyperbolic and thermodynamically consistent. Contrary to the BN model, which involves 6 wave speeds, the new formulation involves 4 waves only, in agreement with the Marble model [Marble, F. E., "Dynamics of a gas containing small solid particles," Combustion and Propulsion (5th AGARD Colloquium) (Pergamon Press, 1963), Vol. 175] based on pressureless Euler equations for the dispersed phase, a well-accepted model for low particle volume concentrations. In the new model, the presence of pressure in the momentum equation of the particles and consideration of volume fractions in the two phases render the model valid for large particle concentrations. A symmetric version of the new model is derived as well for liquids containing gas bubbles. This model version involves 4 characteristic wave speeds as well, but with different velocities. Lastly, the two sub-models with 4 waves are combined in a unique formulation, valid for the full range of volume fractions. It involves the same 6 wave speeds as the BN model, but at a given point of space, 4 waves only emerge, depending on the local volume fractions. The non-linear pressure waves propagate only in the phase with dominant volume fraction.
The new model is tested numerically on various test problems ranging from separated phases in a shock tube to shock-particle cloud interaction. Its predictions are compared to BN and Marble models as well as against experimental data showing clear improvements.
Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing
NASA Astrophysics Data System (ADS)
Watanabe, T.; Nagata, K.
2016-08-01
We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). A priori test of the MVM, based on the direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of the mixing particles should be large for predicting a value of the molecular diffusion term positively correlated to the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important in the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance in a rate close to the reference LES. The statistics in the LPS are very robust to the number of the particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.
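A heavily simplified sketch of a mixing-volume-style particle update: every particle sharing one mixing volume relaxes linearly toward the local mean scalar value. This IEM-like relaxation, with an assumed mixing frequency, is a stand-in for the authors' full MVM closure, which is not reproduced here.

```python
import numpy as np

# Minimal sketch of a mixing-volume-style update: particles inside one
# mixing volume relax toward the local mean scalar. This is an IEM-like
# simplification with an assumed mixing frequency, not the full MVM.
def mix_particles(phi, dt, omega):
    """Relax particle scalars phi toward their mixing-volume mean.

    phi   -- scalar values of the particles sharing one mixing volume
    dt    -- time step
    omega -- mixing frequency (assumed model constant)
    """
    mean = phi.mean()                  # mean over the mixing volume
    return phi + dt * omega * (mean - phi)

phi = np.array([0.0, 0.2, 0.8, 1.0])   # four particles, one mixing volume
for _ in range(100):
    phi = mix_particles(phi, dt=0.05, omega=1.0)

# The update conserves the mean while the scalar variance decays.
print(phi.mean(), phi.var())
```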
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
Experimental properties of forced convection with silicon carbide based nano-fluids
NASA Astrophysics Data System (ADS)
Soanker, Abhinay
With the advent of nanotechnology, many fields of engineering and science took a leap to the next level of advancement. The broad scope of nanotechnology initiated many studies of heat transfer and thermal engineering. Nano-fluids are one such technology and can be thought of as engineered colloidal fluids with nano-sized colloidal particles. There are different types of nano-fluids based on the colloidal particle and base fluids. Nano-fluids can primarily be categorized into metallic, ceramic, oxide, magnetic and carbon based. The present work is part of an investigation of the thermal and rheological properties of ceramic based nano-fluids. An alpha-silicon carbide based nano-fluid was used here, with a 50-50% by volume mixture of ethylene glycol and water as the base fluid. This work is divided into three parts: theoretical modelling of the effective thermal conductivity (ETC) of colloidal fluids, study of the thermal and rheological properties of alpha-SiC nano-fluids, and determination of the heat transfer properties of alpha-SiC nano-fluids. In the first part of this work, a theoretical model for the ETC of static colloidal fluids was formulated based on the particle size, shape (spherical), thermal conductivity of the base fluid and that of the colloidal particle, along with the particle distribution pattern in the fluid. A MATLAB program was written to evaluate this model. The model is specifically derived for the least and greatest possible ETC enhancement, and thereby the lower and upper bounds were determined. In addition, the ETC is also calculated for a uniform colloidal distribution pattern. The effect of volume concentration on the ETC was studied. No effect of particle size was observed for particle sizes below a certain value. Results of this model were compared with the Wiener bounds and Hashin-Shtrikman bounds. The second part of this work is a study of the thermal and rheological properties of alpha-silicon carbide based nano-fluids.
The nano-fluid properties were tested at three different volume concentrations: 0.55%, 1%, and 1.6%. Thermal conductivity was measured for the three volume concentrations as a function of temperature. The thermal conductivity enhancement increased with temperature, which may be attributed to increased Brownian motion of the colloidal particles at higher temperatures. Measured thermal conductivity values were compared with results obtained from the theoretical model derived in this work. The effects of temperature and volume concentration on viscosity were also measured and reported; viscosity increase and its consequences are important issues for the use of nano-fluids. Extensive measurements of heat transfer and pressure drop for forced convection of nano-fluids in circular pipes were also conducted. Parameters such as the heat transfer coefficient, Nusselt number, pressure drop, and a thermal-hydraulic performance factor that accounts for the gains from increased thermal conductivity as well as the penalties from increased pressure drop were evaluated for the laminar and transition flow regimes. No significant improvement in heat transfer (Nusselt number) compared to the base fluid was observed. The thermal-hydraulic performance factor (change in heat transfer/change in pressure drop) was below unity for many flow conditions, indicating poor overall applicability of SiC-based nano-fluids.
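The Wiener and Hashin-Shtrikman bounds against which the model was compared have simple closed forms for a two-phase mixture. The sketch below is a minimal illustration of those textbook formulas, not the authors' MATLAB code; the property values in the usage note are rough, assumed figures for an EG/water base fluid and SiC particles.

```python
def wiener_bounds(k_f, k_p, phi):
    """Wiener (series/parallel) bounds on effective thermal conductivity.
    k_f, k_p: conductivities of base fluid and particle (W/m.K); phi: particle volume fraction."""
    upper = phi * k_p + (1.0 - phi) * k_f          # parallel slabs
    lower = 1.0 / (phi / k_p + (1.0 - phi) / k_f)  # series slabs
    return lower, upper

def hashin_shtrikman_bounds(k_f, k_p, phi):
    """Hashin-Shtrikman bounds for an isotropic two-phase mixture (assumes k_p > k_f)."""
    lower = k_f + phi / (1.0 / (k_p - k_f) + (1.0 - phi) / (3.0 * k_f))
    upper = k_p + (1.0 - phi) / (1.0 / (k_f - k_p) + phi / (3.0 * k_p))
    return lower, upper
```

For example, with an assumed base-fluid conductivity of about 0.4 W/m.K, a SiC particle conductivity of about 120 W/m.K, and phi = 0.016, the Hashin-Shtrikman bounds fall strictly inside the Wiener bounds, as the theory requires.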
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November, 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's functions approaches) to the engineering application of simulated ground motion for the analysis of seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice to assess seismic hazard in terms of nuclear safety in low seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.
SToRM: A Model for Unsteady Surface Hydraulics Over Complex Terrain
Simoes, Francisco J.
2014-01-01
A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model is based on an unstructured cell-centered finite volume formulation and a nonlinear strong stability preserving Runge-Kutta time stepping scheme. The numerical discretization is founded on the classical and well established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. The model’s implementation within a graphical user interface is discussed. Field application of the model is illustrated by utilizing it to estimate peak flow discharges in a flooding event of historic significance in Colorado, U.S.A., in 2013.
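A minimal 1-D analogue of such a Godunov-type finite-volume scheme can illustrate the conservative cell-centered update. The sketch below is a schematic with a simple Rusanov flux and periodic boundaries, not the SToRM implementation (which is 2-D, unstructured, and uses auto-switching fluxes).

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, hu):
    """Physical flux of the 1-D shallow water equations for state U = (h, hu)."""
    return np.array([hu, hu * hu / h + 0.5 * G * h * h])

def rusanov_flux(UL, UR):
    """Rusanov (local Lax-Friedrichs) numerical flux at a cell interface."""
    (hL, huL), (hR, huR) = UL, UR
    smax = max(abs(huL / hL) + (G * hL) ** 0.5,
               abs(huR / hR) + (G * hR) ** 0.5)
    return 0.5 * (swe_flux(hL, huL) + swe_flux(hR, huR)) \
        - 0.5 * smax * (np.array(UR) - np.array(UL))

def step(h, hu, dx, dt):
    """One conservative finite-volume update with periodic boundaries."""
    n = len(h)
    F = np.array([rusanov_flux((h[i], hu[i]),
                               (h[(i + 1) % n], hu[(i + 1) % n]))
                  for i in range(n)]).T            # F[:, i] = flux at interface i+1/2
    U = np.stack([h, hu]) - dt / dx * (F - np.roll(F, 1, axis=1))
    return U[0], U[1]
```

Because each interface flux is added to one cell and subtracted from its neighbour, total water volume is conserved to machine precision, which is the property such models rely on when estimating flood discharges.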
NASA Astrophysics Data System (ADS)
Kereszturi, Gábor; Németh, Károly; Cronin, Shane J.; Agustín-Flores, Javier; Smith, Ian E. M.; Lindsay, Jan
2013-10-01
Monogenetic basaltic volcanism is characterised by a complex array of behaviours in the spatial distribution of magma output and also temporal variability in magma flux and eruptive frequency. Investigating this in detail is hindered by the difficulty in evaluating ages of volcanic events as well as volumes erupted in each volcano. Eruptive volumes are an important input parameter for volcanic hazard assessment and may control eruptive scenarios, especially transitions between explosive and effusive behaviour and the length of eruptions. Erosion, superposition and lack of exposure limit the accuracy of volume determination, even for very young volcanoes. In this study, a systematic volume estimation model is developed and applied to the Auckland Volcanic Field in New Zealand. In this model, a basaltic monogenetic volcano is categorised into six parts. Subsurface portions of volcanoes, such as diatremes beneath phreatomagmatic volcanoes, or crater infills, are approximated by geometrical considerations, based on exposed analogue volcanoes. Positive volcanic landforms, such as scoria/spatter cones, tephra rings and lava flows, were defined by using a Light Detection and Ranging (LiDAR) survey-based Digital Surface Model (DSM). Finally, the distal tephra associated with explosive eruptions was approximated using published relationships that relate original crater size to ejecta volumes. Considering only those parts with high reliability, the overall magma output (converted to Dense Rock Equivalent) for the post-250 ka active Auckland Volcanic Field in New Zealand is a minimum of 1.704 km3. This is made up of 1.329 km3 in lava flows, 0.067 km3 in phreatomagmatic crater lava infills, 0.090 km3 within tephra/tuff rings, 0.112 km3 inside crater lava infills, and 0.104 km3 within scoria cones. Using the minimum eruptive volumes, the spatial and temporal magma fluxes are estimated at 0.005 km3/km2 and 0.007 km3/ka.
The temporal-volumetric evolution of Auckland is characterised by an increasing magma flux over the last 40 ky, which is inferred to have been triggered by plate tectonic processes (e.g. increased asthenospheric shearing and back-arc spreading underneath the Auckland region).
A global airport-based risk model for the spread of dengue infection via the air transport network.
Gardner, Lauren; Sarkar, Sahotra
2013-01-01
The number of travel-acquired dengue infections has seen a consistent global rise over the past decade. An increased volume of international passenger air traffic originating from regions with endemic dengue has contributed to a rise in the number of dengue cases in both areas of endemicity and elsewhere. This paper reports results from a network-based risk assessment model which uses international passenger travel volumes, travel routes, travel distances, regional populations, and predictive species distribution models (for the two vector species, Aedes aegypti and Aedes albopictus) to quantify the relative risk posed by each airport in importing passengers with travel-acquired dengue infections. Two risk attributes are evaluated: (i) the risk posed by through traffic at each stopover airport and (ii) the risk posed by incoming travelers to each destination airport. The model results prioritize optimal locations (i.e., airports) for targeted dengue surveillance. The model is easily extendible to other vector-borne diseases.
A trait-based test for habitat filtering: Convex hull volume
Cornwell, W.K.; Schwilk, D.W.; Ackerly, D.D.
2006-01-01
Community assembly theory suggests that two processes affect the distribution of trait values within communities: competition and habitat filtering. Within a local community, competition leads to ecological differentiation of coexisting species, while habitat filtering reduces the spread of trait values, reflecting shared ecological tolerances. Many statistical tests for the effects of competition exist in the literature, but measures of habitat filtering are less well-developed. Here, we present convex hull volume, a construct from computational geometry, which provides an n-dimensional measure of the volume of trait space occupied by species in a community. Combined with ecological null models, this measure offers a useful test for habitat filtering. We use convex hull volume and a null model to analyze California woody-plant trait and community data. Our results show that observed plant communities occupy less trait space than expected from random assembly, a result consistent with habitat filtering. ?? 2006 by the Ecological Society of America.
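The convex hull volume test can be sketched in a few lines with `scipy.spatial.ConvexHull`. This is a schematic of the approach, not the authors' code: the observed community's hull volume is compared against a null distribution built from random draws out of the regional species pool.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(traits):
    """Convex hull volume of a (species x traits) matrix.
    Note: in 2-D, ConvexHull.volume is the enclosed area."""
    return ConvexHull(traits).volume

def null_hull_volumes(pool, n_species, n_iter=999, seed=0):
    """Null distribution: hull volumes of communities assembled at random
    from the species pool (i.e., no habitat filtering)."""
    rng = np.random.default_rng(seed)
    return np.array([
        hull_volume(pool[rng.choice(len(pool), n_species, replace=False)])
        for _ in range(n_iter)
    ])
```

Habitat filtering is inferred when the observed hull volume falls in the lower tail of this null distribution, i.e., the community occupies less trait space than random assembly would predict.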
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanisms matrices accounting for the odd-bounce, double-bounce and volume scattering. We estimate the phase based on the Random volume over Ground (RVoG) model, and as the initial input parameter of the numerical method which is used to solve the parameters of decomposition. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas rather than the pure random volume scattering as proposed by Freeman-Durden to make best fit to the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and their vertical location along the vertical dimension. We test the algorithms with L- and P- band simulated data.
Min, Yi; Song, Ying; Gao, Yuan; Dummer, Paul M H
2016-08-01
This study aimed to present a new method based on numeric calculus to provide data on the theoretical volume ratio of voids when using the cold lateral compaction technique in canals with various diameters and tapers. Twenty-one simulated mathematical root canal models were created with different tapers and sizes of apical diameter, and were filled with defined sizes of standardized accessory gutta-percha cones. The areas of each master and accessory gutta-percha cone as well as the depth of their insertion into the canals were determined mathematically in Microsoft Excel. When the first accessory gutta-percha cone had been positioned, the residual area of void was measured. The areas of the residual voids were then measured repeatedly upon insertion of additional accessory cones until no more could be inserted in the canal. The volume ratio of voids was calculated through measurement of the volume of the root canal and mass of gutta-percha cones. The theoretical volume ratio of voids was influenced by the taper of the canal, the size of the apical preparation and the size of the accessory gutta-percha cones. Greater apical preparation size and larger taper together with the use of smaller accessory cones reduced the volume ratio of voids in the apical third. The mathematical model provided a precise method to determine the theoretical volume ratio of voids in root-filled canals when using cold lateral compaction.
Sun, Jirun; Eidelman, Naomi; Lin-Gibson, Sheng
2009-03-01
The objectives of this study were to (1) demonstrate X-ray micro-computed tomography (microCT) as a viable method for determining the polymerization shrinkage and microleakage on the same sample accurately and non-destructively, and (2) investigate the effect of sample geometry (e.g., C-factor and volume) on polymerization shrinkage and microleakage. Composites placed in a series of model cavities of controlled C-factors and volumes were imaged using microCT to determine their precise location and volume before and after photopolymerization. Shrinkage was calculated by comparing the volume of composites before and after polymerization and leakage was predicted based on gap formation between composites and cavity walls as a function of position. Dye penetration experiments were used to validate microCT results. The degree of conversion (DC) of composites measured using FTIR microspectroscopy in reflectance mode was nearly identical for composites filled in all model cavity geometries. The shrinkage of composites calculated based on microCT results was statistically identical regardless of sample geometry. Microleakage, on the other hand, was highly dependent on the C-factor as well as the composite volume, with higher C-factors and larger volumes leading to a greater probability of microleakage. Spatial distribution of microleakage determined by microCT agreed well with results determined by dye penetration. microCT has proven to be a powerful technique in quantifying polymerization shrinkage and corresponding microleakage for clinically relevant cavity geometries.
Orbital flight test shuttle external tank aerothermal flight evaluation, volume 1
NASA Technical Reports Server (NTRS)
Praharaj, Sarat C.; Engel, Carl D.; Warmbrod, John D.
1986-01-01
This 3-volume report discusses the evaluation of aerothermal flight measurements made on the orbital flight test Space Shuttle External Tanks (ETs). Six ETs were instrumented to measure various quantities during flight; including heat transfer, pressure, and structural temperature. The flight data was reduced and analyzed against math models established from an extensive wind tunnel data base and empirical heat-transfer relationships. This analysis has supported the validity of the current aeroheating methodology and existing data base; and, has also identified some problem areas which require methodology modifications. This is Volume 1, an Executive Summary. Volume 2 contains Appendices A (Aerothermal Comparisons) and B (Flight-Derived h sub i/h sub u vs. M sub inf. Plots), and Volume 3 contains Appendix C (Comparison of Interference Factors among OFT Flight, Prediction and 1H-97A Data), Appendix D (Freestream Stanton Number and Reynolds Number Correlation for Flight and Tunnel Data), and Appendix E (Flight-Derived h sub i/h sub u Tables).
NASA Astrophysics Data System (ADS)
Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su
2010-02-01
This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI volumes. The presented model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of the MRI volume is located and the slices potentially containing tumor are identified according to their symmetry; an initial boundary of the tumor is determined by watershed and morphological algorithms in the slice where the tumor is largest. Second, the level set method is applied to this initial boundary, driving the curve to evolve and stop at the appropriate tumor boundary. Last, the tumor boundary is projected slice by slice onto its neighbors through the volume to segment the whole tumor. The experimental results are compared with manual tracing by an expert and show relatively good agreement.
Orbital flight test shuttle external tank aerothermal flight evaluation, volume 3
NASA Technical Reports Server (NTRS)
Praharaj, Sarat C.; Engel, Carl D.; Warmbrod, John D.
1986-01-01
This 3-volume report discusses the evaluation of aerothermal flight measurements made on the orbital flight test Space Shuttle External Tanks (ETs). Six ETs were instrumented to measure various quantities during flight; including heat transfer, pressure, and structural temperature. The flight data was reduced and analyzed against math models established from an extensive wind tunnel data base and empirical heat-transfer relationships. This analysis has supported the validity of the current aeroheating methodology and existing data base; and, has also identified some problem areas which require methodology modifications. Volume 1 is the Executive Summary. Volume 2 contains Appendix A (Aerothermal Comparisons), and Appendix B (Flight-Derived h sub i/h sub u vs. M sub inf. Plots). This is Volume 3, containing Appendix C (Comparison of Interference Factors between OFT Flight, Prediction and 1H-97A Data), Appendix D (Freestream Stanton Number and Reynolds Number Correlation for Flight and Tunnel Data), and Appendix E (Flight-Derived h sub i/h sub u Tables).
Orbital flight test shuttle external tank aerothermal flight evaluation, volume 2
NASA Technical Reports Server (NTRS)
Praharaj, Sarat C.; Engel, Carl D.; Warmbrod, John D.
1986-01-01
This 3-volume report discusses the evaluation of aerothermal flight measurements made on the orbital flight test Space Shuttle External Tanks (ETs). Six ETs were instrumented to measure various quantities during flight; including heat transfer, pressure, and structural temperature. The flight data was reduced and analyzed against math models established from an extensive wind tunnel data base and empirical heat-transfer relationships. This analysis has supported the validity of the current aeroheating methodology and existing data base; and, has also identified some problem areas which require methodology modifications. Volume 1 is the Executive Summary. This is volume 2, containing Appendix A (Aerothermal Comparisons), and Appendix B (Flight-Derived h sub i/h sub u vs. M sub inf. Plots). Volume 3 contains Appendix C (Comparison of Interference Factors between OFT Flight, Prediction and 1H-97A Data), Appendix D (Freestream Stanton Number and Reynolds Number Correlation for Flight and Tunnel Data), and Appendix E (Flight-Derived h sub i/h sub u Tables).
Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Mardor, Yael; Miklavcic, Damijan
2016-03-01
Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for the treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation; such a model can be used for individual treatment planning. Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of MRIs of rat brains acquired after electroporation treatments. Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r(2) = 0.79; p < 0.008, r(2) = 0.91; p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete cell death and no cell death thresholds was relatively narrow (0.88-0.91) even for small numbers of pulses and depended weakly on the number of pulses; for BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup.
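The Peleg-Fermi model underlying this statistical approach gives the surviving cell fraction as a sigmoid of electric field strength, with a critical field and a kinetic constant that decay with pulse number. The sketch below uses one common exponential parameterisation with purely illustrative constants, not values fitted to the paper's data.

```python
import math

def peleg_fermi_survival(E, n, Ec0=1200.0, k1=0.02, A0=200.0, k2=0.01):
    """Surviving cell fraction after n pulses of field E (V/cm).
    Ec(n) = Ec0*exp(-k1*n) and A(n) = A0*exp(-k2*n) is a common
    parameterisation; the default constants are hypothetical."""
    Ec = Ec0 * math.exp(-k1 * n)  # critical field: 50% survival
    A = A0 * math.exp(-k2 * n)    # kinetic constant: slope of the sigmoid
    return 1.0 / (1.0 + math.exp((E - Ec) / A))
```

At E = Ec(n) exactly half the cells survive, and the same field kills more cells as n grows; the plateau effect reported above corresponds to Ec(n) flattening out at large pulse numbers.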
The Search for Equity in School Finance: Michigan School District Response to a Guaranteed Tax Base.
ERIC Educational Resources Information Center
Park, Rolla Edward; Carroll, Stephen J.
Part of a three-volume report on the effects of school finance reform, this volume examines the effects of reform on Michigan school districts' budgets from 1971 to 1976. Econometric models were used. Researchers found a very small "price" effect--an elasticity of -.02. The data provide no evidence that state matching grants stimulate…
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.
NASA Technical Reports Server (NTRS)
Shankar, V.; Rowell, C.; Hall, W. F.; Mohammadian, A. H.; Schuh, M.; Taylor, K.
1992-01-01
Accurate and rapid evaluation of radar signature for alternative aircraft/store configurations would be of substantial benefit in the evolution of integrated designs that meet radar cross-section (RCS) requirements across the threat spectrum. Finite-volume time domain methods offer the possibility of modeling the whole aircraft, including penetrable regions and stores, at longer wavelengths on today's gigaflop supercomputers and at typical airborne radar wavelengths on the teraflop computers of tomorrow. A structured-grid finite-volume time domain computational fluid dynamics (CFD)-based RCS code has been developed at the Rockwell Science Center, and this code incorporates modeling techniques for general radar absorbing materials and structures. Using this work as a base, the goal of the CFD-based CEM effort is to define, implement and evaluate various code development issues suitable for rapid prototype signature prediction.
Lui, Y F; Ip, W Y
2016-01-01
Autogenous fat grafts usually suffer from degeneration and volume shrinkage in volume reconstruction applications. How to maintain graft viability and graft volume is an essential consideration in reconstruction therapies. In this investigation, a new fat graft transplantation method was developed to improve long-term graft viability and the volume reconstruction effect by incorporating a hydrogel. The harvested fat graft is dissociated into small fragments and incorporated into a collagen-based hydrogel to form a hydrogel/fat graft complex for volume reconstruction purposes. In vitro results indicate that the collagen-based hydrogel can significantly improve the survival of cells inside the isolated graft. In a 6-month investigation on an artificially created defect model, this hydrogel/fat graft complex filler demonstrated the ability to promote fat pad formation inside the targeted defect area. The newly generated fat pad covered the whole defect and restored its original dimensions at the 6-month time point. Compared to simple fat transplantation, this hydrogel/fat graft complex system provides a marked improvement in long-term volume restoration against degeneration and volume shrinkage. One notable effect is the continuous proliferation of adipose tissue throughout the 6-month period. In summary, the hydrogel/fat graft system presented in this investigation demonstrated a better and more significant effect on volume reconstruction in large-sized volume defects than simple fat transplantation.
Sidibé, Désiré; Sankar, Shrinivasan; Lemaître, Guillaume; Rastgoo, Mojdeh; Massich, Joan; Cheung, Carol Y; Tan, Gavin S W; Milea, Dan; Lamoureux, Ecosse; Wong, Tien Y; Mériaudeau, Fabrice
2017-02-01
This paper proposes a method for automatic classification of spectral domain OCT data for the identification of patients with retinal diseases such as Diabetic Macular Edema (DME). We address this issue as an anomaly detection problem and propose a method that not only allows the classification of the OCT volume, but also allows the identification of the individual diseased B-scans inside the volume. Our approach is based on modeling the appearance of normal OCT images with a Gaussian Mixture Model (GMM) and detecting abnormal OCT images as outliers. The classification of an OCT volume is based on the number of detected outliers. Experimental results with two different datasets show that the proposed method achieves a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, the experiments show that the proposed method achieves better classification performance than other recently published works. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
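The anomaly-detection scheme can be sketched with scikit-learn. This is a schematic of the approach on synthetic features, not the paper's pipeline: the per-B-scan feature extraction and the likelihood threshold (which would come from a validation set) are assumptions here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_normal_model(normal_features, n_components=1, seed=0):
    """Fit a GMM to feature vectors extracted from healthy B-scans."""
    return GaussianMixture(n_components=n_components,
                           random_state=seed).fit(normal_features)

def classify_volume(gmm, volume_features, threshold, max_outliers=0):
    """Flag B-scans whose log-likelihood under the normal model falls below
    `threshold`; label the volume diseased if the number of flagged B-scans
    exceeds `max_outliers`. Returns (is_diseased, per-scan outlier mask)."""
    scores = gmm.score_samples(volume_features)
    outliers = scores < threshold
    return bool(outliers.sum() > max_outliers), outliers
```

The per-scan outlier mask is what allows the method to point at the individual diseased B-scans, not just classify the volume as a whole.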
DISCRETE VOLUME-ELEMENT METHOD FOR NETWORK WATER- QUALITY MODELS
An explicit dynamic water-quality modeling algorithm is developed for tracking dissolved substances in water-distribution networks. The algorithm is based on a mass-balance relation within pipes that considers both advective transport and reaction kinetics. Complete mixing of m...
Quantitative assessment of Urmia Lake water using spaceborne multisensor data and 3D modeling.
Jeihouni, Mehrdad; Toomanian, Ara; Alavipanah, Seyed Kazem; Hamzeh, Saeid
2017-10-18
Preserving aquatic ecosystems and managing water resources are crucial tasks in arid and semi-arid regions because of anthropogenic pressures and climate change. In recent decades, the water level of the largest lake in Iran, Urmia Lake, has decreased sharply, which has become a major environmental concern in Iran and the region. Efforts to revive the lake centre on the amount of water required for restoration. This study monitored and assessed the status of Urmia Lake over a period of 30 years (1984 to 2014) using remotely sensed data. A novel method is proposed that generates a lakebed digital elevation model (LBDEM) for Urmia Lake based on time series images from Landsat satellites, water level field measurements, remote sensing techniques, GIS, and 3D modeling. The volume of water required to restore the lake water level to that of previous years and to the ecological water level was calculated based on the LBDEM. The results indicate a marked change in the area and volume of the lake from its maximum water level in 1998 to its minimum level in 2014: during this period, 86% of the lake became a salt desert, and the volume of the lake water in 2013 was just 0.83% of the 1998 volume. The volume of water required to restore Urmia Lake from its benchmark status (in 2014) to the ecological water level (1274.10 m) is 12.546 Bm3, excluding evaporation. The results and the proposed method can be used by national and international environmental organizations to monitor and assess the status of Urmia Lake and to support decision-making.
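The core volume computation behind such a lakebed DEM analysis is a depth integral over the grid. The sketch below is a minimal illustration on a synthetic grid, not the Urmia LBDEM itself; the grid values and cell size are hypothetical.

```python
import numpy as np

def lake_volume_from_dem(bed_elev_m, water_level_m, cell_area_m2):
    """Water volume (m^3) stored between a lakebed DEM and a target water
    level: sum of (level - bed) over submerged cells, times the cell area."""
    depth = np.clip(water_level_m - bed_elev_m, 0.0, None)  # dry cells contribute 0
    return float(depth.sum() * cell_area_m2)
```

Evaluating this at the benchmark level and at the ecological level (1274.10 m) and differencing the two volumes gives the restoration requirement, before accounting for evaporation.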
NASA Astrophysics Data System (ADS)
Cael, B. B.
How much water do lakes on Earth hold? Global lake volume estimates are scarce, highly variable, and poorly documented. We develop a mechanistic null model for estimating global lake mean depth and volume based on a statistical topographic approach to Earth's surface. The volume-area scaling prediction is accurate and consistent within and across lake datasets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles. We also evaluate the size (area) distribution of lakes on Earth compared to expectations from percolation theory. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2388357.
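The volume-area scaling at the heart of such an estimate is a power law V ≈ c·A^b fitted in log-log space. The sketch below shows the fit on synthetic data, not the actual lake census or the paper's statistical topographic model.

```python
import numpy as np

def fit_volume_area_scaling(area_km2, volume_km3):
    """Least-squares fit of V = c * A**b in log-log space; returns (c, b)."""
    b, log_c = np.polyfit(np.log(area_km2), np.log(volume_km3), 1)
    return float(np.exp(log_c)), float(b)
```

Applied to a lake-area census, summing c·A^b over all lakes yields a global volume estimate, and dividing by total lake area gives the overall mean depth.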
Ogburn, Sarah E.; Calder, Eliza S
2017-01-01
High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. 
VolcFlow is also able to replicate flow runout well, but does not capture the lateral spreading in distal regions of larger-volume flows. Both models are better at reproducing the inundated area of single-pulse, valley-confined, smaller-volume flows than sustained, highly unsteady, larger-volume flows, which are often partially unchannelized. The simple rheological models of TITAN2D and VolcFlow are not able to recreate all features of these more complex flows. LAHARZ is fast to run and can give a rough approximation of inundation, but may not be appropriate for all PDCs and the designation of starting locations is difficult. The ΔH/L cone model is also very quick to run and gives reasonable approximations of runout distance, but does not inherently model flow channelization or directionality and thus unrealistically covers all interfluves. Empirically-based models like LAHARZ and ΔH/L cones can be quick, first-approximations of flow runout, provided a database of similar flows, e.g., FlowDat, is available to properly calculate coefficients or ΔH/L. For hazard assessment purposes, geophysical models like TITAN2D and VolcFlow can be useful for producing both scenario-based or probabilistic hazard maps, but must be run many times with varying input parameters. LAHARZ and ΔH/L cones can be used to produce simple modeling-based hazard maps when run with a variety of input volumes, but do not explicitly consider the probability of occurrence of different volumes. For forward modeling purposes, the ability to derive potential input parameters from global or local databases is crucial, though important input parameters for VolcFlow cannot be empirically estimated. Not only does this work provide a useful comparison of the operational aspects and behavior of various models for hazard assessment, but it also enriches conceptual understanding of the dynamics of the PDCs themselves.
Application of Local Discretization Methods in the NASA Finite-Volume General Circulation Model
NASA Technical Reports Server (NTRS)
Yeh, Kao-San; Lin, Shian-Jiann; Rood, Richard B.
2002-01-01
We present the basic ideas of the dynamics system of the finite-volume General Circulation Model developed at NASA Goddard Space Flight Center for climate simulations and other applications in meteorology. The dynamics of this model is designed with emphasis on conservative and monotonic transport, where the property of Lagrangian conservation is used to maintain the physical consistency of the computational fluid for long-term simulations. While the model benefits from the noise-free solutions of monotonic finite-volume transport schemes, the property of Lagrangian conservation also partly compensates for the loss of transport accuracy caused by the diffusive treatment of monotonicity. By faithfully maintaining the fundamental laws of physics during the computation, this model is able to achieve sufficient accuracy for the global consistency of climate processes. Because the computing algorithms are based on local memory, this model has the advantage of efficiency in parallel computation with distributed memory. Further research is desirable to reduce the diffusion effects of monotonic transport for better accuracy, and to mitigate the limitation due to fast-moving gravity waves for better efficiency.
NASA Astrophysics Data System (ADS)
Nikkhoo, M.; Walter, T. R.; Lundgren, P.; Prats-Iraola, P.
2015-12-01
Ground deformation at active volcanoes is one of the key precursors of volcanic unrest, monitored by InSAR and GPS techniques at high spatial and temporal resolution, respectively. Modelling of the observed displacements establishes the link between them and the underlying subsurface processes and volume change. The so-called Mogi model and the rectangular dislocation are two commonly applied analytical solutions that allow for quick interpretations based on the location, depth and volume change of pressurized spherical cavities and planar intrusions, respectively. Geological observations worldwide, however, suggest elongated, tabular or other non-equidimensional geometries for magma chambers. How can these be modelled? Generalized models, such as Davis's point ellipsoidal cavity or rectangular dislocation solutions, are geometrically limited and can do little to improve the interpretation of the data. We develop a new analytical, artefact-free solution for a rectangular dislocation that also possesses full rotational degrees of freedom. We construct a kinematic model in terms of three pairwise-perpendicular rectangular dislocations with a prescribed opening only. This model represents a generalized point source in the far field, and also performs as a finite dislocation model for planar intrusions in the near field. We show that, by calculating Eshelby's shape tensor, the far-field displacements and stresses of any arbitrary triaxial ellipsoidal cavity can be reproduced with this model. Regardless of its aspect ratios, the volume change of this model is simply the sum of the volume changes of the individual dislocations. Our model can be integrated in any inversion scheme as simply as the Mogi model, profiting at the same time from the advantages of a generalized point source.
After evaluating our model by using a boundary element method code, we apply it to the ground displacements of the 2015 Calbuco eruption, Chile, observed by the Sentinel-1 satellite. We infer the parameters of a deflating elongated source located beneath Calbuco, and find significant differences from Mogi-type solutions. The results imply that interpretations based on our model may help us better understand source characteristics, and in the case of Calbuco volcano infer a volcano-tectonic coupling mechanism.
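For reference against the generalized source above, the classical Mogi point source has a closed-form vertical surface displacement; a sketch of one standard form (the parameter values are illustrative, not the Calbuco inversion results):

```python
import math

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source, in one
    standard form: u_z = (1 - nu) * dV * d / (pi * (r^2 + d^2)^1.5).
    r: radial distance from the epicentre (m); depth: source depth (m);
    dV: volume change (m^3), negative for deflation; nu: Poisson ratio."""
    return (1.0 - nu) * dV * depth / (math.pi * (r**2 + depth**2) ** 1.5)

# Illustrative deflation (hypothetical numbers): 1e7 m^3 lost at 5 km
# depth gives roughly 10 cm of subsidence directly above the source.
uz0 = mogi_uz(0.0, 5000.0, -1e7)
```

Note the key limitation the abstract addresses: this solution carries no shape or orientation information, so elongated sources fitted with it can bias depth and volume-change estimates.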
Consistency of the free-volume approach to the homogeneous deformation of metallic glasses
NASA Astrophysics Data System (ADS)
Blétry, Marc; Thai, Minh Thanh; Champion, Yannick; Perrière, Loïc; Ochin, Patrick
2014-05-01
One of the most widely used approaches to modelling the high-temperature homogeneous deformation of metallic glasses is the free-volume theory, developed by Cohen and Turnbull and extended by Spaepen. A simple elastoviscoplastic formulation has been proposed that allows one to determine various parameters of such a model. This approach is applied here to the results obtained by de Hey et al. on a Pd-based metallic glass. In their study, de Hey et al. were able to determine some of the parameters used in the elastoviscoplastic formulation through DSC modeling coupled with mechanical tests, and the consistency of the two viewpoints was assessed.
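The free-volume flow law referred to above couples a flow-defect concentration, controlled by the reduced free volume, to a stress-biased jump term; a schematic sketch (the functional form follows the Spaepen-type formulation, but the attempt frequency and activation volume are placeholder values, not parameters fitted by de Hey et al.):

```python
import math

def free_volume_strain_rate(tau, T, x, k_f=1e11, omega=1e-28,
                            k_B=1.380649e-23):
    """Schematic Spaepen-type steady-state flow law: the shear strain
    rate is the flow-defect concentration c_f = exp(-1/x), with x the
    reduced free volume, times a stress-biased activated-jump term.
    k_f (attempt frequency, 1/s) and omega (activation volume, m^3)
    are illustrative placeholder values."""
    c_f = math.exp(-1.0 / x)
    return 2.0 * c_f * k_f * math.sinh(tau * omega / (2.0 * k_B * T))
```

The sinh term makes the response Newtonian at low stress and strongly nonlinear at high stress, which is the behaviour the elastoviscoplastic fits exploit.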
NASA Technical Reports Server (NTRS)
Gentz, Steven J.; Ordway, David O.; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approximately 9 inches from the source) dominated by direct wave propagation, mid-field environment (approximately 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This document contains appendices to the Volume I report.
Ceramic component reliability with the restructured NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.
1992-01-01
The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for the statistical fast-fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
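The volume-flaw reliability computation described above reduces, in the two-parameter Weibull case, to summing a risk-of-rupture term over the subelement integration points; a minimal sketch (a simplified stand-in, not the actual CARES implementation):

```python
import math

def weibull_failure_probability(stresses, volumes, sigma0, m):
    """Two-parameter Weibull volume-flaw failure probability, summed
    over (sub)element integration points in the style of weakest-link
    postprocessing: P_f = 1 - exp(-sum_i V_i * (sigma_i / sigma0)**m).
    Compressive (negative) stresses are taken not to contribute."""
    risk = sum(v * (s / sigma0) ** m
               for s, v in zip(stresses, volumes) if s > 0.0)
    return 1.0 - math.exp(-risk)

# One unit-volume element stressed exactly at the scale parameter
# gives the characteristic failure probability 1 - 1/e.
pf = weibull_failure_probability([200.0], [1.0], sigma0=200.0, m=10.0)
```

Splitting elements at the Gaussian points, as the abstract describes, refines the discretization of exactly this volume integral.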
NASA Astrophysics Data System (ADS)
Marc, O.; Hovius, N.; Meunier, P.; Rault, C.
2017-12-01
In tectonically active areas, earthquakes are an important trigger of landslides, with significant impact on hillslope and river evolution. However, detailed prediction of landslide locations and properties for a given earthquake remains difficult. In contrast, we propose landscape-scale analytical predictions of bulk coseismic landsliding, that is, total landslide area and volume (Marc et al., 2016a) as well as the regional area within which most landslides must distribute (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and therefore could be implemented in landscape evolution models aiming to engage with erosion dynamics at the scale of the seismic cycle. To assess the model we have compiled and normalized estimates of total landslide volume, total landslide area, and regional area affected by landslides for 40, 17 and 83 earthquakes, respectively. We have found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict within a factor of 2 the landslide areas and associated volumes for about 70% of the cases in our databases. The prediction of the regional area affected does not require a calibration for landscape steepness and is within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment. Some of the outliers seem related to exceptional rock-mass strength in the epicentral area, or to shaking duration and other seismic-source complexities ignored by the model.
Applications include predicting the mass balance of earthquakes: the model predicts that only earthquakes generated on a narrow range of fault sizes may cause more erosion than uplift (Marc et al., 2016b), while very large earthquakes are expected to always build topography. The model could also be used to physically calibrate hillslope erosion or perturbations to the river network within landscape evolution models.
ERIC Educational Resources Information Center
Wohlstetter, Priscilla; Mohrman, Susan Albers
This document presents findings of the Assessment of School-Based Management Study, which identified the conditions in schools that promote high performance through school-based management (SBM). The study's conceptual framework was based on Edward E. Lawler's (1986) model. The high-involvement framework posits that four resources must spread…
NASA Astrophysics Data System (ADS)
Lenhard, R. J.; Rayner, J. L.; Davis, G. B.
2017-10-01
A model is presented to account for elevation-dependent residual and entrapped LNAPL above and below, respectively, the water-saturated zone when predicting subsurface LNAPL specific volume (fluid volume per unit area) and transmissivity from current and historic fluid levels in wells. Physically-based free, residual, and entrapped LNAPL saturation distributions and LNAPL relative permeabilities are integrated over a vertical slice of the subsurface to yield the LNAPL specific volumes and transmissivity. The model accounts for the effects of fluctuating water tables. Hypothetical predictions are given for different porous media (loamy sand and clay loam), fluid levels in wells, and historic water-table fluctuations. It is shown that the elevation range from the LNAPL-water interface in a well to the upper elevation where the free LNAPL saturation approaches zero is the same for a given LNAPL thickness in a well regardless of porous media type. Further, the LNAPL transmissivity is largely dependent on current fluid levels in wells and not historic levels. Results from the model can aid in developing successful LNAPL remediation strategies and in improving the design and operation of remedial activities. Results of the model can also aid in assessing the LNAPL recovery technology endpoint, based on the predicted transmissivity.
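The specific-volume calculation described above is a vertical integral of porosity times total LNAPL saturation; a schematic sketch with arbitrary step-like saturation profiles standing in for the model's physically based free/residual/entrapped distributions:

```python
import numpy as np

def lnapl_specific_volume(z, porosity, free_sat, residual_sat, entrapped_sat):
    """LNAPL specific volume (fluid volume per unit plan area) as the
    vertical integral of porosity times total LNAPL saturation,
    approximated here by a simple Riemann sum on a uniform grid."""
    dz = z[1] - z[0]
    total_sat = free_sat + residual_sat + entrapped_sat
    return float(np.sum(porosity * total_sat) * dz)

# Hypothetical profiles (not the paper's van Genuchten-type curves):
z = np.linspace(0.0, 2.0, 201)        # elevation above LNAPL-water interface (m)
phi = np.full_like(z, 0.35)           # uniform porosity
free = np.where(z < 1.0, 0.4, 0.0)    # free LNAPL in the lower metre
resid = np.full_like(z, 0.02)         # residual LNAPL throughout
entr = np.zeros_like(z)               # no entrapped LNAPL in this sketch
D_o = lnapl_specific_volume(z, phi, free, resid, entr)
```

The transmissivity computation is analogous but weights the integrand by LNAPL relative permeability rather than saturation alone.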
Urbanisation and 3D Spatial - a Geometric Approach
NASA Astrophysics Data System (ADS)
Duncan, E. E.; Rahman, A. Abdul
2013-09-01
Urbanisation creates immense competition for space; this may be attributed to an increase in population owing to domestic and external tourism. Most cities are constantly exploring all avenues for maximising their limited space. Hence, urban or city authorities need to plan, expand and use the three-dimensional (3D) space above, on and below the city surface. Thus, difficulties in property ownership and the geometric representation of the 3D city space are a major challenge. This research investigates the concept of representing a geometric topological 3D spatial model capable of representing 3D volume parcels for man-made constructions above and below the 3D surface volume parcel. A review of spatial data models suggests that the 3D TIN (TEN) model is significant and can be used as a unified model. The concepts, logical and physical models of 3D TIN for 3D volumes, using tetrahedrons as the base geometry, are presented and implemented to show man-made constructions above and below the surface parcel within a user-friendly graphical interface. Concepts for 3D topology and 3D analysis are discussed. Simulations of this model for the 3D cadastre are implemented. This model can be adopted by most countries to enhance and streamline geometric 3D property ownership for urban centres. The 3D TIN concept for spatial modelling can be adopted for the LA_Spatial part of the Land Administration Domain Model (LADM) (ISO/TC211, 2012), as it satisfies the concept of 3D volumes.
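The tetrahedral base geometry of the 3D TIN (TEN) model makes parcel volumes straightforward: each tetrahedron's signed volume is a determinant, and a 3D volume parcel is the sum of their magnitudes. A minimal sketch:

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Signed volume of a tetrahedron with vertices a, b, c, d, the
    base geometry of the 3D TIN (TEN) model:
    V = det([b - a, c - a, d - a]) / 6.
    A 3D volume parcel is then the sum of |V| over its tetrahedra."""
    return float(np.linalg.det(np.array([b - a, c - a, d - a])) / 6.0)

# Unit right-corner tetrahedron: volume 1/6.
v = tet_volume(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
               np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

The sign carries orientation information, which is what makes tetrahedral decompositions usable for the 3D topology checks the abstract mentions.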
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants.
Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna
2016-06-27
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.
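Computing the S/V ratio from a scanned mesh, as described above, needs only the triangle areas and a divergence-theorem volume; a sketch assuming a closed, consistently outward-oriented mesh (the example solid is a unit right-corner tetrahedron, not scan data):

```python
import numpy as np

def surface_area_and_volume(vertices, faces):
    """Surface area and enclosed volume of a closed, consistently
    outward-oriented triangle mesh. Volume uses the divergence
    theorem: the sum of signed volumes of the tetrahedra formed by
    each face and the origin."""
    area, volume = 0.0, 0.0
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        volume += np.dot(a, np.cross(b, c)) / 6.0
    return area, abs(volume)

# Example solid: unit right-corner tetrahedron (4 vertices, 4 faces).
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
S, V = surface_area_and_volume(verts, faces)
sv_ratio = S / V
```

This is why mesh-based phenotyping yields an exact S/V for irregular succulents where hand measurement can only approximate.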
Bhowmik, Arka; Repaka, Ramjee; Mulaveesala, Ravibabu; Mishra, Subhash C
2015-07-01
A theoretical study on the quantification of the surface thermal response of cancerous human skin using the frequency modulated thermal wave imaging (FMTWI) technique is presented in this article. For the first time, the use of the FMTWI technique for the detection and differentiation of skin cancer is demonstrated. A three-dimensional multilayered skin model has been considered, with counter-current blood vessels in the individual skin layers and different stages of cancerous lesions based on geometrical, thermal and physical parameters available in the literature. Transient surface thermal responses of melanoma during FMTWI of skin cancer have been obtained by integrating the heat transfer model for biological tissue with the flow model for blood vessels. The numerical results show that blood flow in the subsurface region leads to a substantial alteration of the surface thermal response of the human skin. This alteration further reduces the performance of the thermal imaging technique during the thermal evaluation of the earliest melanoma stages (small volume) compared with relatively large volumes. Based on the theoretical study, the method is predicted to be more suitable for the detection and differentiation of melanoma of comparatively large volume than of the earliest development stages (small volume). The study also performed phase-based image analysis of the raw thermograms to resolve the different stages of melanoma volume. The phase images were found to individuate the different development stages of melanoma more clearly than the raw thermograms.
Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-01-01
In this paper, we present a graph-based concurrent brain tumor segmentation and atlas to diseased patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. PMID:24717540
Ferreira-Santos, Fernando; Almeida, Pedro R.; Barbosa, Fernando; Marques-Teixeira, João; Marsh, Abigail A.
2015-01-01
Research suggests psychopathy is associated with structural brain alterations that may contribute to the affective and interpersonal deficits frequently observed in individuals with high psychopathic traits. However, the regional alterations related to different components of psychopathy are still unclear. We used voxel-based morphometry to characterize the structural correlates of psychopathy in a sample of 35 healthy adults assessed with the Triarchic Psychopathy Measure (TriPM). Furthermore, we examined the regional grey matter alterations associated with the components described by the triarchic model. Our results showed that, after accounting for variation in total intracranial volume, age and IQ, overall psychopathy was negatively associated with grey matter volume in the left putamen and amygdala. Additional regression analysis with anatomical regions of interest revealed that total TriPM score was also associated with increased lateral orbitofrontal cortex (OFC) and caudate volume. Boldness was positively associated with volume in the right insula. Meanness was positively associated with lateral OFC and striatum volume, and negatively associated with amygdala volume. Finally, disinhibition was negatively associated with amygdala volume. The results highlight the contribution of both subcortical and cortical brain alterations to subclinical psychopathy and are discussed in light of prior research and theoretical accounts of the neurobiological bases of psychopathic traits. PMID:25971600
This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...
Method and apparatus for modeling interactions
Xavier, Patrick G.
2002-01-01
The present invention provides a method and apparatus for modeling interactions that overcomes drawbacks. The method of the present invention comprises representing two bodies undergoing translations by two swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention is more robust and allows faster modeling than previous methods.
Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang
2014-01-01
To build a combined model that fits the variation of death-toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, a Verhulst model was built based on the road traffic accident death tolls in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted combined prediction model, with the weight coefficients calculated by the Shapley value method according to each model's contribution. Finally, the combined model was used to recalculate the death tolls from 2002 to 2011, and was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death-toll data but also quantify the degree of influence of each factor on the death toll, with high accuracy and strong practicability. PMID:25610454
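The combination step described above is a weighted sum of the two models' predictions; a minimal sketch (the predictions and weights below are hypothetical, not the Shapley values computed in the paper):

```python
def combined_forecast(verhulst_pred, regression_pred, w_verhulst, w_regression):
    """Combined death-toll forecast as a weighted sum of the Verhulst
    and multivariate-regression predictions. In the paper the weights
    come from a Shapley-value assessment of each model's contribution;
    here they are simply required to be non-negative and sum to one."""
    assert w_verhulst >= 0.0 and w_regression >= 0.0
    assert abs(w_verhulst + w_regression - 1.0) < 1e-9
    return w_verhulst * verhulst_pred + w_regression * regression_pred

# Hypothetical single-year predictions (deaths) and weights:
estimate = combined_forecast(62000.0, 60000.0, 0.4, 0.6)
```

The appeal of the Shapley weighting is that each model's weight reflects its marginal contribution to the combined accuracy rather than an ad hoc choice.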
Multiscale Modeling of Carbon Nanotube-Epoxy Nanocomposites
NASA Astrophysics Data System (ADS)
Fasanella, Nicholas A.
Epoxy composites are widely used in the aerospace industry. In order to improve stiffness and thermal conductivity, carbon nanotube additives to epoxies are being explored. This dissertation presents multiscale modeling techniques to study the engineering properties of single-walled carbon nanotube (SWNT)-epoxy nanocomposites, consisting of pristine and covalently functionalized systems. Using Molecular Dynamics (MD), thermomechanical properties were calculated for a representative polymer unit cell. Finite Element (FE) and orientation distribution function (ODF) based methods were used in a multiscale framework to obtain macroscale properties. An epoxy network was built using the dendrimer growth approach. The epoxy model was verified by matching the experimental glass transition temperature, density, and dilatation. MD, via the consistent valence force field (CVFF), was used to explore the mechanical and dilatometric effects of adding pristine and functionalized SWNTs to epoxy. Full stiffness matrices and linear coefficient of thermal expansion vectors were obtained. The Green-Kubo method was used to investigate the thermal conductivity as a function of temperature for the various nanocomposites. Inefficient phonon transport at the ends of nanotubes is an important factor in the thermal conductivity of the nanocomposites, and for this reason discontinuous nanotubes were modeled in addition to long nanotubes. To obtain continuum-scale elastic properties from the MD data, multiscale modeling was considered to give better control over the volume fraction of nanotubes and to investigate the effects of nanotube alignment. Two methods were considered: an FE based method and an ODF based method. The FE method probabilistically assigned elastic properties of elements from the MD lattice results based on the desired volume fraction and alignment of the nanotubes.
For the ODF method, a distribution function was generated based on the desired amount of nanotube alignment; and the stiffness matrix was calculated. A rule of mixture approach was implemented in the ODF model to vary the SWNT volume fraction. Both the ODF and FE models are compared and contrasted. ODF analysis is significantly faster for nanocomposites and is a novel contribution in this thesis. Multiscale modeling allows for the effects of nanofillers in epoxy systems to be characterized without having to run costly experiments.
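The rule-of-mixtures step used in the ODF model to vary SWNT volume fraction can be sketched in its simplest Voigt form (the moduli below are rough illustrative values, not the MD-derived results):

```python
def rule_of_mixtures(E_fiber, E_matrix, vf):
    """Voigt (upper-bound) rule of mixtures for the axial stiffness of
    a two-phase composite: E = vf * E_f + (1 - vf) * E_m."""
    return vf * E_fiber + (1.0 - vf) * E_matrix

# Illustrative moduli in GPa: ~1 TPa SWNTs at 1% volume fraction
# in a ~3 GPa epoxy matrix.
E = rule_of_mixtures(1000.0, 3.0, 0.01)
```

The Voigt form assumes perfect alignment and load transfer, which is why it is paired with the ODF weighting when the nanotubes are only partially aligned.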
NASA Astrophysics Data System (ADS)
Safaei, Hadi; Emami, Mohsen Davazdah; Jazi, Hamidreza Salimi; Mostaghimi, Javad
2017-12-01
Applications of hollow spherical particles in the thermal spraying process have been developed in recent years, accompanied by experimental and numerical studies aiming to better understand the impact of a hollow droplet on a surface. During such a process, the volume and density of the gas trapped inside the droplet change, and numerical models should be able to simulate such changes and their consequent effects. The aim of this study is to numerically simulate the impact of a hollow ZrO2 droplet on a flat surface using the volume of fluid technique for compressible flows. An open-source, finite-volume-based CFD code was used to perform the simulations, where appropriate subprograms were added to handle the studied cases. Simulation results were compared with the available experimental data. Results showed that at high impact velocities (U0 > 100 m/s), the compression of the trapped gas inside the droplet played a significant role in the impact dynamics. At such velocities, the droplet splashed explosively. Compressibility effects result in a more porous splat, compared to the corresponding incompressible model. Moreover, the compressible model predicted a higher spread factor than the incompressible model, due to the planetary structure of the splat.
Wang, Y X; Ngo, H H; Guo, W S
2015-11-15
The studied bamboo-based activated carbon (BbAC) with high specific surface area (SSA) and high micropore volume was prepared from bamboo scraps by the combined activation of H3PO4 and K2CO3. The BbAC was characterized based on the N2 adsorption isotherm at 77 K. The results showed that the SSA and pore volume of BbAC increased with increasing impregnation ratio and reached maxima at an impregnation ratio of 3:1 at 750°C. Under these optimal conditions, the BbAC had a maximum SSA of 2237 m²/g and a maximum total pore volume of 1.23 cm³/g, with a micropore ratio of more than 90%. The adsorption performance of ciprofloxacin (CIP) on the BbAC was determined at 298 K. The Langmuir and Freundlich models were employed to describe the adsorption equilibrium, and the kinetic data were fitted by pseudo-first-order and pseudo-second-order kinetic models. The results showed that the Langmuir model and the pseudo-second-order kinetic model gave the better fits for the adsorption equilibrium and kinetics data, respectively. The maximum adsorption amount of CIP (613 mg/g) on the BbAC was much higher than values reported in the literature. In conclusion, the BbAC could be a promising adsorption material for CIP removal from water.
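The Langmuir and Freundlich isotherms used above have simple closed forms; a sketch with hypothetical parameters (q_max is set near the reported maximum uptake purely for illustration; the affinity constant K and the Freundlich constants are assumed, not fitted values from the paper):

```python
def langmuir(C, q_max, K):
    """Langmuir isotherm: uptake q = q_max * K * C / (1 + K * C),
    saturating at the monolayer capacity q_max as C grows."""
    return q_max * K * C / (1.0 + K * C)

def freundlich(C, K_F, n):
    """Freundlich isotherm: q = K_F * C**(1/n), an empirical power law
    with no saturation plateau."""
    return K_F * C ** (1.0 / n)

# Uptake (mg/g) at increasing equilibrium concentrations (mg/L):
uptakes = [langmuir(C, q_max=613.0, K=2.0) for C in (0.1, 1.0, 100.0)]
```

The saturation plateau is what distinguishes the two fits: a data set that levels off at high concentration favours Langmuir, as reported here.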
A Structural Molar Volume Model for Oxide Melts Part III: Fe Oxide-Containing Melts
NASA Astrophysics Data System (ADS)
Thibodeau, Eric; Gheribi, Aimen E.; Jung, In-Ho
2016-04-01
As part III of this series, the model is extended to iron oxide-containing melts. All available experimental data in the FeO-Fe2O3-Na2O-K2O-MgO-CaO-MnO-Al2O3-SiO2 system were critically evaluated based on the experimental conditions. The variations of FeO and Fe2O3 in the melts were taken into account by using FactSage to calculate the Fe2+/Fe3+ distribution. The molar volume model with unary and binary model parameters can be used to predict the molar volume of molten oxides in the Li2O-Na2O-K2O-MgO-CaO-MnO-PbO-FeO-Fe2O3-Al2O3-SiO2 system over the entire range of compositions, temperatures, and oxygen partial pressures from Fe saturation to 1 atm.
Development of the Joint NASA/NCAR General Circulation Model
NASA Technical Reports Server (NTRS)
Lin, S.-J.; Rood, R. B.
1999-01-01
The Data Assimilation Office at NASA/Goddard Space Flight Center is collaborating with NCAR/CGD in an ambitious proposal for the development of a unified climate, numerical weather prediction, and chemistry transport model which is suitable for global data assimilation of the physical and chemical state of the Earth's atmosphere. A prototype model based on the NCAR CCM3 physics and the NASA finite-volume dynamical core has been built. A unique feature of the NASA finite-volume dynamical core is its advanced tracer transport algorithm on the floating Lagrangian control-volume coordinate. The model currently has a highly idealized ozone production/loss chemistry derived from the observed 2D (latitude-height) climatology of recent decades. Nevertheless, the simulated horizontal wave structure of the total ozone is in good qualitative agreement with observations (TOMS). Long-term climate simulations and NWP experiments have been carried out. The current status and future plans will be discussed at the conference.
Taylor, P. R.; Baker, R. E.; Simpson, M. J.; Yates, C. A.
2016-01-01
Numerous processes across both the physical and biological sciences are driven by diffusion. Partial differential equations are a popular tool for modelling such phenomena deterministically, but it is often necessary to use stochastic models to accurately capture the behaviour of a system, especially when the number of diffusing particles is low. The stochastic models we consider in this paper are ‘compartment-based’: the domain is discretized into compartments, and particles can jump between these compartments. Volume-excluding effects (crowding) can be incorporated by blocking movement with some probability. Recent work has established the connection between fine- and coarse-grained models incorporating volume exclusion, but only for uniform lattices. In this paper, we consider non-uniform, hybrid lattices that incorporate both fine- and coarse-grained regions, and present two different approaches to describe the interface of the regions. We test both techniques in a range of scenarios to establish their accuracy, benchmarking against fine-grained models, and show that the hybrid models developed in this paper can be significantly faster to simulate than the fine-grained models in certain situations and are at least as fast otherwise. PMID:27383421
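The compartment-based, volume-excluding jump process described above can be sketched on a uniform single-occupancy lattice (a deliberately minimal version; the paper's hybrid fine/coarse interface is not modelled here):

```python
import random

def simulate_jumps(occupancy, n_steps, seed=0):
    """Minimal compartment-based random walk with volume exclusion on
    a uniform 1-D lattice: each step picks a random compartment, and
    if it holds a particle, attempts a nearest-neighbour hop that is
    blocked (crowding) when the target compartment is already full.
    `occupancy` is a list of 0/1 single-capacity compartments with
    reflecting boundaries."""
    rng = random.Random(seed)
    n = len(occupancy)
    for _ in range(n_steps):
        i = rng.randrange(n)
        if occupancy[i] == 0:
            continue                           # picked an empty compartment
        j = i + rng.choice((-1, 1))            # proposed neighbour
        if 0 <= j < n and occupancy[j] == 0:   # in-domain and unoccupied
            occupancy[i], occupancy[j] = 0, 1  # hop succeeds
    return occupancy

state = simulate_jumps([1, 1, 0, 0, 0, 0], 1000)
```

In a hybrid lattice, coarse compartments would hold more than one particle and the blocking probability at the interface is what the paper's two coupling approaches define differently.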
DOT National Transportation Integrated Search
1975-05-01
The report describes an analytical approach to estimation of fuel consumption in rail transportation, and provides sample computer calculations suggesting the sensitivity of fuel usage to various parameters. The model used is based upon careful delin...
DOT National Transportation Integrated Search
1977-01-01
Auto production and operation consume energy, material, capital and labor resources. Numerous substitution possibilities exist within and between resource sectors, corresponding to the broad spectrum of potential design technologies. Alternative auto...
Effect of environmental factors on pavement deterioration : Final report, Volume II of II
DOT National Transportation Integrated Search
1988-11-01
A computerized model for the determination of pavement deterioration responsibilities due to load and non-load related factors was developed. The model is based on predicted pavement performance and the relationship of pavement performance to a quant...
Effect of environmental factors on pavement deterioration : Final report, Volume I of II.
DOT National Transportation Integrated Search
1988-11-01
A computerized model for the determination of pavement deterioration responsibilities due to load and non-load related factors was developed. The model is based on predicted pavement performance and the relationship of pavement performance to a quant...
Mazel, Vincent; Busignies, Virginie; Duca, Stéphane; Leclerc, Bernard; Tchoreloff, Pierre
2011-05-30
In the pharmaceutical industry, tablets are obtained by the compaction of two or more components which have different physical properties and compaction behaviours. It is therefore of interest to predict the physical properties of the mixture using the single-component results. In this paper, we have focused on the prediction of the compressibility of binary mixtures using the Kawakita model. Microcrystalline cellulose (MCC) and L-alanine were compacted alone and mixed at different weight fractions. The volume reduction, as a function of the compaction pressure, was acquired during the compaction process ("in-die") and after elastic recovery ("out-of-die"). For the pure components, the Kawakita model is well suited to the description of the volume reduction. For binary mixtures, an original approach for predicting the volume reduction without using the effective Kawakita parameters was proposed and tested. The good agreement between experimental and predicted data proved that this model is efficient in predicting the volume reduction of MCC and L-alanine mixtures during compaction experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
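As a concrete illustration of the Kawakita analysis, the model C = a·b·P/(1 + b·P), with C the engineering strain and P the compaction pressure, is usually fitted through its standard linearisation P/C = P/a + 1/(a·b). The parameter values below are synthetic placeholders, not the MCC or L-alanine data:

```python
import numpy as np

def kawakita_fit(P, C):
    """Fit C = a*b*P / (1 + b*P) via the linearisation
    P/C = P/a + 1/(a*b), i.e. a straight line in (P, P/C)."""
    slope, intercept = np.polyfit(P, P / C, 1)
    a = 1.0 / slope            # limiting engineering strain
    b = 1.0 / (a * intercept)  # pressure coefficient
    return a, b

# synthetic check with known parameters (illustrative, not measured data)
a_true, b_true = 0.7, 0.05
P = np.linspace(5.0, 300.0, 30)              # compaction pressure, MPa
C = a_true * b_true * P / (1 + b_true * P)   # engineering strain
a_est, b_est = kawakita_fit(P, C)
```

On noise-free synthetic data the linearised fit recovers the generating parameters; with real "in-die" data one would weight or fit the nonlinear form directly.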
Dumas, J L; Lorchel, F; Perrot, Y; Aletti, P; Noel, A; Wolf, D; Courvoisier, P; Bosset, J F
2007-03-01
The goal of our study was to quantify the limits of EUD models for use in score functions in inverse planning software and for clinical application. We focused on oesophagus cancer irradiation. Our evaluation was based on theoretical dose volume histograms (DVH), which we analyzed using volumetric and linear quadratic EUD models, average and maximum dose concepts, the linear quadratic model and the differential area between each DVH. We evaluated our models using theoretical and more complex DVHs for the regions of interest. We studied three types of DVH for the target volume: the first followed the ICRU dose homogeneity recommendations; the second was built from the first's requirements with the same average dose built in for all cases; the third was truncated by a small dose hole. We also built theoretical DVHs for the organs at risk, in order to evaluate the limits of, and the ways to use, both EUD(1) and EUD/LQ models, comparing them to the traditional ways of scoring a treatment plan. For each volume of interest we built theoretical treatment plans with differences in the fractionation. We concluded that both volumetric and linear quadratic EUDs should be used. Volumetric EUD(1) takes into account neither hot-cold spot compensation nor differences in fractionation, but it is more sensitive to an increase in the irradiated volume. With linear quadratic EUD/LQ, a volumetric analysis of the effect of fractionation variation can be performed.
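The volumetric EUD used in such score functions is commonly computed as the generalised EUD, EUD = (Σᵢ vᵢ·Dᵢᵃ)^(1/a), evaluated on a differential DVH. A minimal sketch; the DVH bin values are invented for illustration:

```python
import numpy as np

def gEUD(doses, volumes, a):
    """Generalised EUD from a differential DVH: (sum v_i * D_i^a)^(1/a),
    with v_i normalised fractional volumes."""
    v = np.asarray(volumes, float)
    v = v / v.sum()
    return float((v * np.asarray(doses, float) ** a).sum() ** (1.0 / a))

D = np.array([50.0, 60.0, 70.0])   # bin doses, Gy (illustrative)
v = np.array([0.2, 0.5, 0.3])      # fractional volumes per bin
```

The exponent controls the weighting: a = 1 recovers the mean dose, while large positive a approaches the maximum dose, which is why different organs (serial vs parallel) are scored with different exponents.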
NASA Astrophysics Data System (ADS)
Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode
2018-06-01
The size of the cavitation region is a key parameter in estimating the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed the cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles, which depends on bubble characteristics and ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles has been performed to quantitatively measure the size of the cavitation region and the volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that cavitation bubbles maintain a log-normal size distribution, and the volume fraction of cavitation bubbles obeys a tanh function of the applied ultrasonic intensity. Using the experimental values of bubble characteristics as input, the predicted cavitation depth agrees well with observations except for a slight deviation at higher acoustic intensities. Further analysis shows that increases in bubble volume and bubble size both lead to higher attenuation by cavitation bubbles and, hence, a smaller cavitation depth. The current model offers a guideline for implementing UST, especially for structural refinement.
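The attenuation-based depth estimate can be sketched as follows. This is a deliberately simplified, uncoupled reading of the model: plane-wave intensity decay with a fixed attenuation coefficient, plus the tanh saturation of bubble volume fraction reported in the abstract. All parameter values are invented placeholders, not the Al-Cu measurements.

```python
import math

def cavitation_depth(i0, i_th, alpha):
    """Depth at which I(x) = I0*exp(-2*alpha*x) falls to the cavitation
    threshold I_th; alpha is the bubble-dominated attenuation coefficient."""
    if i0 <= i_th:
        return 0.0
    return math.log(i0 / i_th) / (2.0 * alpha)

def bubble_volume_fraction(intensity, phi_max=1e-3, i_sat=100.0):
    """tanh saturation of bubble volume fraction with ultrasonic intensity
    (functional form from the abstract; parameter values are placeholders)."""
    return phi_max * math.tanh(intensity / i_sat)
```

In the full model the attenuation coefficient itself depends on the bubble population, so depth and volume fraction would be solved self-consistently rather than independently as here.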
Pimenta, A F R; Valente, A; Pereira, J M C; Pereira, J C F; Filipe, H P; Mata, J L G; Colaço, R; Saramago, B; Serro, A P
2016-12-01
Currently, most in vitro drug release studies for ophthalmic applications are carried out in static sink conditions. Although this procedure is simple and useful for comparative studies, it does not adequately describe the drug release kinetics in the eye, considering the small tear volume and flow rates found in vivo. In this work, a microfluidic cell was designed and used to mimic the continuous, volumetric flow rate of tear fluid and its low volume. The suitable operation of the cell, in terms of uniformity and symmetry of flux, was proved using a numerical model based on the Navier-Stokes and continuity equations. The release profile of a model system (a hydroxyethyl methacrylate-based hydrogel (HEMA/PVP) for soft contact lenses (SCLs) loaded with diclofenac) obtained with the microfluidic cell was compared with that obtained in static conditions, showing that the kinetics of release in dynamic conditions is slower. The application of the numerical model demonstrated that the designed cell can be used to simulate drug release over the whole range of the human eye tear film volume, and allowed estimation of the drug concentration in the volume of liquid in direct contact with the hydrogel. Knowledge of this concentration, which is significantly different from that measured in the experimental tests during the first hours of release, is critical to predict the toxicity of the drug release system and its in vivo efficacy. In conclusion, the use of the microfluidic cell in conjunction with the numerical model should be a valuable tool to design and optimize new therapeutic drug-loaded SCLs.
3D-information fusion from very high resolution satellite sensors
NASA Astrophysics Data System (ADS)
Krauss, T.; d'Angelo, P.; Kuschk, G.; Tian, J.; Partovi, T.
2015-04-01
In this paper we show the pre-processing and potential for environmental applications of very high resolution (VHR) satellite stereo imagery such as that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, first a dense digital surface model (DSM) has to be generated. From this, a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are derived. Combining these elevation-based data with a spectral classification allows detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can be used directly for simulation and monitoring of environmental issues. Examples are the simulation of floods, building-volume and population estimation, simulation of noise from roads, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and the sprawl of informal settlements, and much more. Outside urban areas, volume information brings literally a new dimension to Earth observation tasks, such as volume estimation of forests and illegal logging, the volume of (illegal) open-pit mining activities, estimation of flooding or tsunami risks, dike planning, etc. In this paper we present the preprocessing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images and derived digital terrain models (DTMs). From these components we show how a monitoring and decision-fusion-based 3D change detection can be realized using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
NASA Astrophysics Data System (ADS)
Wu, Ming; Wu, Jianfeng; Wu, Jichun
2017-10-01
When a dense nonaqueous phase liquid (DNAPL) enters the subsurface environment, its migration behavior is crucially affected by the permeability and entry pressure of the subsurface porous media. A prerequisite for accurately simulating DNAPL migration in aquifers is therefore the determination of the permeability, entry pressure and corresponding representative elementary volumes (REV) of the porous media; however, these are difficult to determine precisely. This study utilizes the light transmission micro-tomography (LTM) method to determine the permeability and entry pressure of two-dimensional (2D) translucent porous media, and integrates the LTM with a criterion of relative gradient error to quantify the corresponding REV of the porous media. As a result, DNAPL migration in porous media can be accurately simulated by discretizing the model at the REV dimension. To validate the quantification methods, an experiment of perchloroethylene (PCE) migration was conducted in a two-dimensional heterogeneous bench-scale aquifer cell. Based on the permeability, entry pressure and REV scales of the 2D porous media determined by the LTM and relative gradient error, models with different discretization grid sizes were used to simulate the PCE migration. The model based on the REV size agrees well with the experimental results over the entire migration period, including calibration, verification and validation processes. This helps to better understand the microstructures of porous media and to accurately simulate DNAPL migration in aquifers based on the REV estimation.
SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A
2014-06-01
Purpose: To evaluate uncertainties of cell surviving fractions reconstructed from tumor-volume variation curves during radiation therapy, using sensitivity analysis based on linear perturbation theory. Methods: The time-dependent tumor-volume functions V(t) have been calculated using a two-level cell population model, which is based on the separation of the entire tumor cell population into two subpopulations: oxygenated viable cells and lethally damaged cells. The sensitivity function is defined as S(t) = [δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time-dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions have been computed for different values of the cell surviving fraction from the practically important interval S2 = 0.1-0.7 using the two-level cell population model. Results: The sensitivity function of tumor volume to cell surviving fraction reached a relatively large value of 2.7 for S2 = 0.7 and then approached zero as S2 approaches zero. Assuming a systematic error of 3-4%, we obtain that the relative error in S2 is less than 20% in the range S2 = 0.4-0.7. This result is important because large values of S2, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values of S2 < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used for the evaluation of the cell surviving fractions usually observed in radiation therapy with conventional fractionation.
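The direct perturbation method is easy to state in code: perturb the parameter by a small relative amount, recompute the volume, and form the ratio of relative changes. The tumour-volume function below is a toy exponential stand-in, not the two-level population model of the abstract:

```python
def sensitivity(model, x, t, eps=1e-4):
    """S(t) = [dV(t)/V(t)] / [dx/x] estimated by a one-sided relative
    perturbation of the parameter x."""
    V = model(x, t)
    return ((model(x * (1 + eps), t) - V) / V) / eps

def toy_volume(s2, t):
    # illustrative: volume after t fractions with per-fraction survival s2
    return 100.0 * s2 ** t

s = sensitivity(toy_volume, 0.5, 3)   # analytically S = t = 3 for this toy model
```

For V = V0·s2^t the exact sensitivity is t (since d ln V / d ln s2 = t), so the finite-difference estimate can be checked against a closed form before applying it to the full model.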
Ebacher, G; Besner, M C; Clément, B; Prévost, M
2012-09-01
Intrusion events caused by transient low pressures may result in the contamination of a water distribution system (DS). This work aims at estimating the range of potential intrusion volumes that could result from a real downsurge event caused by a momentary pump shutdown. A model calibrated with transient low pressure recordings was used to simulate total intrusion volumes through leakage orifices and submerged air vacuum valves (AVVs). Four critical factors influencing intrusion volumes were varied: the external head of (untreated) water on leakage orifices, the external head of (untreated) water on submerged air vacuum valves, the leakage rate, and the diameter of AVVs' outlet orifice (represented by a multiplicative factor). Leakage orifices' head and AVVs' orifice head levels were assessed through fieldwork. Two sets of runs were generated as part of two statistically designed experiments. A first set of 81 runs was based on a complete factorial design in which each factor was varied over 3 levels. A second set of 40 runs was based on a Latin hypercube design, better suited for experimental runs on a computer model. The simulations were conducted using commercially available transient analysis software. Responses, measured by total intrusion volumes, ranged from 10 to 366 L. A second-degree polynomial was used to analyze the total intrusion volumes. Sensitivity analyses of both designs revealed that the relationship between the total intrusion volume and the four contributing factors is not monotonic, with the AVVs' orifice head being the most influential factor. When intrusion through both pathways occurs concurrently, interactions between the intrusion flows through leakage orifices and submerged AVVs influence intrusion volumes. When only intrusion through leakage orifices is considered, the total intrusion volume is more largely influenced by the leakage rate than by the leakage orifices' head.
The latter mainly impacts the extent of the area affected by intrusion. Copyright © 2012 Elsevier Ltd. All rights reserved.
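A Latin hypercube design like the 40-run set described above can be generated in a few lines: stratify each dimension into n equal-probability bins, draw one jittered point per bin, and permute the columns independently. A generic sketch; the mapping of the unit-interval columns onto the four physical factors (heads, leakage rate, orifice-diameter factor) is left out:

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """n samples in d dimensions on [0, 1): one jittered sample per
    equal-probability stratum in every dimension, with the columns
    permuted independently so the dimensions are decoupled."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n  # one point per stratum
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                # shuffle each column
    return u

# e.g. 40 runs over four factors; each column is then rescaled to its range
X = latin_hypercube(40, 4, seed=1)
```

Compared with the 3-level factorial, this fills the four-dimensional factor space with far fewer runs, which is why it suits experiments on a computer model.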
SToRM: A Model for 2D environmental hydraulics
Simões, Francisco J. M.
2017-01-01
A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model, SToRM, is based on an unstructured cell-centered finite volume formulation and on nonlinear strong stability preserving Runge-Kutta time stepping schemes. The numerical discretization is founded on the classical and well-established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. Computational efficiency is achieved through a parallel implementation based on the OpenMP standard and the Fortran programming language. SToRM’s implementation within a graphical user interface is discussed. Field application of SToRM is illustrated by utilizing it to estimate peak flow discharges in a flooding event of the St. Vrain Creek in Colorado, U.S.A., in 2013, which reached 850 m³/s (~30,000 ft³/s) at the location of this study.
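A 1D toy analogue of such a Godunov-type finite volume scheme conveys the structure: conservative update from interface fluxes, here Rusanov (local Lax-Friedrichs) fluxes with reflective walls. This is a forward-Euler sketch for illustration, not SToRM's unstructured 2D scheme with SSP Runge-Kutta stepping:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def step(h, hu, dx, dt):
    """One forward-Euler update of the 1D shallow water equations using
    Rusanov (local Lax-Friedrichs) interface fluxes and reflective walls."""
    H = np.r_[h[0], h, h[-1]]           # ghost cells mirror the depth
    HU = np.r_[-hu[0], hu, -hu[-1]]     # and reverse momentum (solid walls)
    U = HU / H
    F1 = HU                              # mass flux
    F2 = HU * U + 0.5 * G * H**2         # momentum flux
    c = np.abs(U) + np.sqrt(G * H)       # characteristic speed bound
    a = np.maximum(c[:-1], c[1:])        # one dissipation speed per interface
    f1 = 0.5 * (F1[:-1] + F1[1:]) - 0.5 * a * (H[1:] - H[:-1])
    f2 = 0.5 * (F2[:-1] + F2[1:]) - 0.5 * a * (HU[1:] - HU[:-1])
    return (h - dt / dx * (f1[1:] - f1[:-1]),
            hu - dt / dx * (f2[1:] - f2[:-1]))

# dam-break initial condition on 100 cells (CFL is comfortably below 1)
n, dx, dt = 100, 1.0, 0.05
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
for _ in range(50):
    h, hu = step(h, hu, dx, dt)
```

Because the update is written in flux-difference form and the wall fluxes carry no mass, total water volume is conserved to round-off, the defining property of this class of schemes.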
Computational analysis of species transport and electrochemical characteristics of a MOLB-type SOFC
NASA Astrophysics Data System (ADS)
Hwang, J. J.; Chen, C. K.; Lai, D. Y.
A multi-physics model coupling electrochemical kinetics with fluid dynamics has been developed to simulate the transport phenomena in mono-block-layer built (MOLB) solid oxide fuel cells (SOFC). A typical MOLB module is composed of trapezoidal flow channels, corrugated positive electrode-electrolyte-negative electrode (PEN) plates, and planar interconnects. A control-volume-based finite difference method, founded on the conservation of mass, momentum, energy, species, and electric charge, is employed for the calculation. In the porous electrodes, the flow momentum is governed by a Darcy model with constant porosity and permeability. The diffusion of reactants follows the Bruggman model. The chemistry within the plates is described via surface reactions with a fixed surface-to-volume ratio, tortuosity and average pore size. Species transport, as well as the local variations of electrochemical characteristics such as overpotential and current density distributions in the electrodes of an MOLB SOFC, is discussed in detail.
Model-based monitoring of stormwater runoff quality.
Birch, Heidi; Vezzaro, Luca; Mikkelsen, Peter Steen
2013-01-01
Monitoring of micropollutants (MP) in stormwater is essential to evaluate the impacts of stormwater on the receiving aquatic environment. The aim of this study was to investigate how different strategies for monitoring of stormwater quality (combining a model with field sampling) affect the information obtained about MP discharged from the monitored system. A dynamic stormwater quality model was calibrated using MP data collected by automatic volume-proportional sampling and passive sampling in a storm drainage system on the outskirts of Copenhagen (Denmark) and a 10-year rain series was used to find annual average (AA) and maximum event mean concentrations. Use of this model reduced the uncertainty of predicted AA concentrations compared to a simple stochastic method based solely on data. The predicted AA concentration, obtained by using passive sampler measurements (1 month installation) for calibration of the model, resulted in the same predicted level but with narrower model prediction bounds than by using volume-proportional samples for calibration. This shows that passive sampling allows for a better exploitation of the resources allocated for stormwater quality monitoring.
Method and apparatus for modeling interactions
Xavier, Patrick G.
2000-08-08
A method and apparatus for modeling interactions between bodies. The method comprises representing two bodies undergoing translations and rotations by two hierarchical swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention can serve as a practical tool in motion planning, CAD systems, simulation systems, safety analysis, and applications that require modeling time-based interactions. A body can be represented in the present invention by a union of convex polygons and convex polyhedra. As used generally herein, polyhedron includes polygon, and polyhedra includes polygons. The body undergoing translation can be represented by a swept body representation, where the swept body representation comprises a hierarchical bounding volume representation whose leaves each contain a representation of the region swept by a section of the body during the translation, and where the union of the regions is a superset of the region swept by the surface of the body during translation. Interactions between two bodies thus represented can be modeled by modeling interactions between the convex hulls of the finite sets of discrete points in the swept body representations.
Multiscale Modeling of Cell Interaction in Angiogenesis: From the Micro- to Macro-scale
NASA Astrophysics Data System (ADS)
Pillay, Samara; Maini, Philip; Byrne, Helen
Solid tumors require a supply of nutrients to grow in size. To this end, tumors induce the growth of new blood vessels from existing vasculature through the process of angiogenesis. In this work, we use a discrete agent-based approach to model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death processes. We use the transition probabilities associated with the discrete models to determine collective cell behavior, in terms of partial differential equations, using a Markov chain and master equation framework. We find that the cell-level dynamics gives rise to a migrating cell front in the form of a traveling wave on the macro-scale. The behavior of this front depends on the cell interactions that are included and the extent to which volume exclusion is taken into account in the discrete micro-scale model. We also find that well-established continuum models of angiogenesis cannot distinguish between certain types of cell behavior on the micro-scale. This may impact drug development strategies based on these models.
A motor-driven syringe-type gradient maker for forming immobilized pH gradient gels.
Fawcett, J S; Sullivan, J V; Chidakel, B E; Chrambach, A
1988-05-01
A motor-driven gradient maker based on the commercial model (Jule Inc., Trumbull, CT) was designed for immobilized pH gradient gels to provide small volumes, rapid stirring and delivery, strict volume and temperature control, and air exclusion. The device was constructed and, by a convenient procedure, yields highly reproducible gradients either in solution or in polyacrylamide gels.
Evaluation of Morphological Plasticity in the Cerebella of Basketball Players with MRI
Park, In Sung; Han, Jong Woo; Lee, Kea Joo; Lee, Nam Joon; Lee, Won Teak; Park, Kyung Ah
2006-01-01
The cerebellum is a key structure involved in motor learning and coordination. In animal models, motor skill learning increased the volume of the molecular layer and the number of synapses on Purkinje cells in the cerebellar cortex. The aim of this study was to investigate whether an analogous change in cerebellar volume occurs in a human population that learns specialized motor skills and practices them intensively over a long period. Magnetic resonance image (MRI)-based cerebellar volumetry was performed in basketball players and matched controls with V-works image software. Total brain volume and absolute and relative cerebellar volumes were compared between the two groups. There was no significant group difference in the total brain volume or the absolute and relative cerebellar volumes. Thus, we could not detect structural change in the cerebellum of this athlete group at the macroscopic level. PMID:16614526
NASA Astrophysics Data System (ADS)
Aghaei, Alireza; Khorasanizadeh, Hossein; Sheikhzadeh, Ghanbar Ali
2018-01-01
The main objectives of this study were measurement of the dynamic viscosity of CuO-MWCNTs/SAE 5W-50 hybrid nanofluid, utilization of artificial neural networks (ANN) and development of a new viscosity model. The nanofluid was prepared by a two-stage procedure with volume fractions of 0.05, 0.1, 0.25, 0.5, 0.75 and 1%. Then, using a Brookfield viscometer, its dynamic viscosity was measured at temperatures of 5, 15, 25, 35, 45 and 55 °C. The experimental results demonstrate that the viscosity increases with increasing nanoparticle volume fraction and decreases with increasing temperature. Based on the experimental data, the maximum and minimum viscosity enhancements when the volume fraction increases from 0.05 to 1% are 35.52% and 12.92%, at constant temperatures of 55 and 15 °C, respectively. Since higher engine-oil viscosity at higher temperatures is an advantage, this result is important. The viscosity magnitudes measured at various shear rates show that this hybrid nanofluid is Newtonian. An ANN model was employed to predict the viscosity of the CuO-MWCNTs/SAE 5W-50 hybrid nanofluid, and the results showed that the ANN can estimate the viscosity efficiently and accurately. Finally, a new temperature- and volume-fraction-based third-degree polynomial empirical model was developed for viscosity estimation. The comparison shows that this model is in good agreement with the experimental data.
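A generic version of such a temperature- and volume-fraction-based cubic fit can be written as an ordinary least-squares problem over a polynomial design matrix. The basis terms and all data below are assumptions for illustration, not the paper's fitted coefficients or measurements:

```python
import numpy as np

def design(T, phi):
    """Full cubic polynomial basis in temperature and volume fraction."""
    return np.column_stack([np.ones_like(T), T, phi, T**2, T*phi, phi**2,
                            T**3, T**2*phi, T*phi**2, phi**3])

def fit_viscosity(T, phi, mu):
    """Least-squares fit of the cubic surface to measured viscosities."""
    coef, *_ = np.linalg.lstsq(design(T, phi), mu, rcond=None)
    return coef

# smoke test: a surface lying inside the model space is recovered exactly
Tg, Pg = np.meshgrid([5.0, 15.0, 25.0, 35.0, 45.0, 55.0],
                     [0.05, 0.1, 0.25, 0.5, 0.75, 1.0])
T, phi = Tg.ravel(), Pg.ravel()
mu = 300 - 4*T + 50*phi + 0.02*T**2 - 0.5*T*phi   # synthetic viscosities
coef = fit_viscosity(T, phi, mu)
pred = design(T, phi) @ coef
```

With real data one would additionally report R² against held-out measurements rather than the training grid.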
Wavelet analysis of head acceleration response under dirac excitation for early oedema detection.
Kostopoulos, V; Loutas, T H; Derdas, C; Douzinas, E
2008-04-01
The present work deals with the application of an innovative in-house developed wavelet-based methodology for the analysis of the acceleration responses of a human head complex model as a simulated diffuse oedema progresses. The human head complex has been modeled as a structure consisting of three confocal prolate spheroids, where the three regions defined by the system of spheroids represent, from the outside to the inside, the skull, the region of cerebrospinal fluid, and the brain tissue. A Dirac-like pulse has been used to excite the human head complex model, and the acceleration response of the system has been calculated and analyzed via the wavelet-based methodology. For the purpose of the present analysis, a wave-propagation commercial finite element code, LS-DYNA 3D, has been used. The progressive diffuse oedema was modeled via consecutive increases in brain volume accompanied by a decrease in brain density. It was shown that even a small increase in brain volume (at the level of 0.5%) can be identified by the effect it has on the vibration characteristics of the human head complex. More precisely, it was found that for some of the wavelet decomposition levels the energy content changes monotonically as the brain volume increases, thus providing a useful index for monitoring an oncoming brain oedema before any brain damage appears due to uncontrolled intracranial hypertension. For the levels of brain volume increase considered in the present analysis, no pressure increase was assumed in the cranial vault and, correspondingly, no variation of brain compliance.
Melt production in large-scale impact events: Implications and observations at terrestrial craters
NASA Technical Reports Server (NTRS)
Grieve, Richard A. F.; Cintala, Mark J.
1992-01-01
The volume of impact melt relative to the volume of the transient cavity increases with the size of the impact event. Here, we use the impact of chondrite into granite at 15, 25, and 50 km/s to model impact-melt volumes at terrestrial craters in crystalline targets and explore the implications for terrestrial craters. Figures are presented that illustrate the relationships between melt volume and final crater diameter D(sub R) for observed terrestrial craters in crystalline targets; also included are model curves for the three different impact velocities. One implication of the increase in melt volumes with increasing crater size is that the depth of melting will also increase. This requires that shock effects occurring at the base of the cavity in simple craters and in the uplifted peaks of central structures at complex craters record progressively higher pressures with increasing crater size, up to a maximum of partial melting (approx. 45 GPa). Higher pressures cannot be recorded in the parautochthonous rocks of the cavity floor as they will be represented by impact melt, which will not remain in place. We have estimated maximum recorded pressures from a review of the literature, using such observations as planar features in quartz and feldspar, diaplectic glasses of feldspar and quartz, and partial fusion and vesiculation, as calibrated with estimates of the pressures required for their formation. Erosion complicates the picture by removing the surficial (most highly shocked) rocks in uplifted structures, thereby reducing the maximum shock pressures observed. In addition, the range of pressures that can be recorded is limited. Nevertheless, the data define a trend to higher recorded pressures with crater diameter, which is consistent with the implications of the model. A second implication is that, as the limit of melting intersects the base of the cavity, central topographic peaks will be modified in appearance and ultimately will not occur. 
That is, the peak will first develop a central depression, due to the flow of low-strength melted materials, when the melt volume begins to intersect the transient-cavity base.
NASA Astrophysics Data System (ADS)
Chen, Xinyuan; Gong, Xiaolin; Graff, Christian G.; Santana, Maira; Sturgeon, Gregory M.; Sauer, Thomas J.; Zeng, Rongping; Glick, Stephen J.; Lo, Joseph Y.
2017-03-01
While patient-based breast phantoms are realistic, they are limited by low resolution due to the image acquisition and segmentation process. The purpose of this study is to restore the high-frequency components of patient-based phantoms by adding power law noise (PLN) and breast structures generated from mathematical models. First, 3D radially symmetric PLN with β = 3 was added at the boundary between adipose and glandular tissue to connect broken tissue and create a high-frequency contour of the glandular tissue. Next, selected high-frequency features from the FDA rule-based computational phantom (Cooper's ligaments, ductal network, and blood vessels) were fused into the phantom. The effects of the enhancement are demonstrated by 2D mammography projections and digital breast tomosynthesis (DBT) reconstruction volumes. The addition of PLN and rule-based models leads to a continuous decrease in β. The new β is 2.76, which is similar to what is typically found for reconstructed DBT volumes. The new combined breast phantoms retain the realism from segmentation and gain higher resolution after restoration.
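Power law noise of this kind is typically synthesized by filtering white Gaussian noise in Fourier space so that the power spectrum falls off as 1/f^β. A 2D sketch of that standard construction (the phantom work uses the 3D analogue, and its β = 3 applies to the 3D spectrum):

```python
import numpy as np

def power_law_noise(shape, beta=3.0, seed=0):
    """Radially symmetric power-law noise: filter white Gaussian noise so
    the power spectrum ~ 1/f^beta (amplitude filter f^(-beta/2))."""
    rng = np.random.default_rng(seed)
    F = np.fft.fftn(rng.standard_normal(shape))
    fx = np.fft.fftfreq(shape[0])[:, None]
    fy = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    filt = np.where(f > 0, f, np.inf) ** (-beta / 2.0)  # DC term -> 0
    noise = np.fft.ifftn(F * filt).real
    return (noise - noise.mean()) / noise.std()          # zero mean, unit std

field = power_law_noise((64, 64), beta=3.0)
```

The normalized field would then be scaled and added only near the adipose/glandular boundary, as the abstract describes.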
A study of the radiobiological modeling of the conformal radiation therapy in cancer treatment
NASA Astrophysics Data System (ADS)
Pyakuryal, Anil Prasad
Cancer is one of the leading causes of mortality in the world. Precise diagnosis of the disease helps patients to select the appropriate treatment modality, such as surgery, chemotherapy or radiation therapy. The physics of X-radiation and advanced imaging technologies such as positron emission tomography (PET) and computed tomography (CT) play an important role in efficient diagnosis and therapeutic treatment of cancer. However, the accuracy of measurements of metabolic target volumes (MTVs) in the PET/CT dual-imaging modality is always limited. Similarly, external beam radiation therapy (XRT), such as 3D conformal radiotherapy (3DCRT) and intensity modulated radiation therapy (IMRT), is the most common modality in radiotherapy treatment. These treatments are simulated and evaluated using XRT plans and standard methodologies in a commercial planning system. However, normal organs are always susceptible to radiation toxicity in these treatments, owing to the lack of appropriate radiobiological models for estimating clinical outcomes. We explored several methodologies for estimating MTVs by reviewing various techniques of target volume delineation using static phantoms in PET scans. The review suggests that a more precise and practical method of delineating a PET MTV is an intermediate volume between the volume covered by a standardized uptake value (SUV) of glucose of 2.5 and the 50% (40%) threshold of the maximum SUV for smaller (larger) volume delineations in radiotherapy applications. Similarly, various types of optimal XRT plans were designed using CT and PET/CT scans for the treatment of various types of cancer patients. The qualities of these plans were assessed using universal plan indices. The dose-volume criteria were also examined in the targets and organs by analyzing conventional dose-volume histograms (DVHs). 
Biological models such as tumor control probability based on Poisson statistics and normal tissue complication probability based on the Lyman-Kutcher-Burman model were efficient in estimating the radiobiological outcomes of the treatments by taking into account dose-volume effects in the organs. Furthermore, a novel technique of spatial DVH analysis was found to be useful for determining the primary cause of complications in critical organs. The study also showed that the 3DCRT and IMRT techniques offer promising results in the XRT treatment of left-breast and prostate cancer patients, respectively. Unfortunately, several organs, such as the salivary glands and larynx, and the esophagus, were found to be significantly vulnerable to radiation toxicity in the treatment of head and neck (HN) and left-lung cancer patients, respectively. The radiobiological outcomes were also found to be consistent with the clinical results of IMRT-based treatments of a significant number of HN cancer patients.
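The intermediate-volume delineation rule suggested by this review can be sketched in a few lines; taking the mean of the two threshold volumes, and all names below, are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def mtv_estimate(suv, voxel_ml, small_lesion=True):
    # Volume enclosed by the fixed SUV >= 2.5 contour
    v_fixed = np.count_nonzero(suv >= 2.5) * voxel_ml
    # Percentage-of-maximum threshold: 50% for smaller lesions,
    # 40% for larger ones, as suggested by the review
    frac = 0.5 if small_lesion else 0.4
    v_pct = np.count_nonzero(suv >= frac * suv.max()) * voxel_ml
    # Take an intermediate value (here simply the mean) of the two
    # volumes; averaging is an illustrative choice, not the published rule
    return 0.5 * (v_fixed + v_pct)
```

In practice the delineation operates on contours rather than raw voxel counts, but the thresholding logic is the same.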
Estimation of the Viscosities of Liquid Sn-Based Binary Lead-Free Solder Alloys
NASA Astrophysics Data System (ADS)
Wu, Min; Li, Jinquan
2018-01-01
The viscosity of binary Sn-based lead-free solder alloys was calculated by combining a predictive model with the Miedema model. A viscosity factor was proposed, and the relationship between viscosity and surface tension was analyzed as well. The results show that the viscosities of Sn-based lead-free solders predicted by the model are in excellent agreement with reported values. The viscosity factor is determined by three physical parameters: atomic volume, electronic density, and electronegativity. In addition, an apparent correlation between the surface tension and viscosity of binary Sn-based Pb-free solders was obtained from the model.
Levick, Shaun R; Hessenmöller, Dominik; Schulze, E-Detlef
2016-12-01
Monitoring and managing carbon stocks in forested ecosystems requires accurate and repeatable quantification of the spatial distribution of wood volume at landscape to regional scales. Grid-based forest inventory networks have provided valuable records of forest structure and dynamics at individual plot scales, but in isolation they may not represent the carbon dynamics of heterogeneous landscapes encompassing diverse land-management strategies and site conditions. Airborne LiDAR has greatly enhanced forest structural characterisation and, in conjunction with field-based inventories, it provides avenues for monitoring carbon over broader spatial scales. Here we aim to enhance the integration of airborne LiDAR surveying with field-based inventories by exploring the effect of inventory plot size and number on the relationship between field-estimated and LiDAR-predicted wood volume in deciduous broad-leafed forest in central Germany. Estimation of wood volume from airborne LiDAR was most robust (R² = 0.92, RMSE = 50.57 m³ ha⁻¹, ~14.13 Mg C ha⁻¹) when trained and tested with 1 ha experimental plot data (n = 50). Predictions based on a more extensive (n = 1100) plot network with considerably smaller (0.05 ha) plots were inferior (R² = 0.68, RMSE = 101.01 m³ ha⁻¹, ~28.09 Mg C ha⁻¹). Differences between the 1 ha and 0.05 ha volume models from LiDAR were, however, negligible at the scale of individual land-management units. Sample size permutation tests showed that increasing the number of 0.05 ha inventory plots above 350 returned no improvement in the R² and RMSE variability of the LiDAR-predicted wood volume model. Our results confirm the utility of LiDAR for estimating wood volume in deciduous broad-leafed forest, but highlight the challenges associated with field plot size and number in establishing robust relationships between airborne LiDAR and field-derived wood volume.
We are moving into a forest-management era in which field inventory and airborne LiDAR are inextricably linked, and we encourage field inventory campaigns to strive for increased plot size and to give greater attention to precise stem geolocation for better integration with remote sensing strategies.
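The R² and RMSE statistics that drive the plot-design comparison above can be reproduced with a simple least-squares fit; the single LiDAR predictor below is a hypothetical stand-in for whatever height metric is actually used.

```python
import numpy as np

def fit_and_score(lidar_metric, field_volume_m3ha):
    # Ordinary least-squares line: field-estimated volume ~ LiDAR metric
    slope, intercept = np.polyfit(lidar_metric, field_volume_m3ha, 1)
    pred = slope * lidar_metric + intercept
    resid = field_volume_m3ha - pred
    # Coefficient of determination and root-mean-square error,
    # the two statistics reported for the 1 ha and 0.05 ha designs
    r2 = 1.0 - np.sum(resid**2) / np.sum(
        (field_volume_m3ha - field_volume_m3ha.mean())**2)
    rmse = np.sqrt(np.mean(resid**2))
    return r2, rmse
```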
Lemola, Sakari; Oser, Nadine; Urfer-Maurer, Natalie; Brand, Serge; Holsboer-Trachsler, Edith; Bechtel, Nina; Grob, Alexander; Weber, Peter; Datta, Alexandre N
2017-01-01
To determine whether the relationship of gestational age (GA) with brain volumes and cognitive functions is linear or whether it follows a threshold model in preterm and term-born children at school age. We studied 106 children (M = 10 years 1 month, SD = 16 months; 40 females) enrolled in primary school: 57 were healthy very preterm children (10 born at 24-27 completed weeks' gestation (extremely preterm), 14 born at 28-29 completed weeks' gestation, 19 born at 30-31 completed weeks' gestation (very preterm), and 14 born at 32 completed weeks' gestation (moderately preterm)), all born appropriate for GA (AGA), and 49 were term-born children. Neuroimaging involved voxel-based morphometry with the statistical parametric mapping software. Cognitive functions were assessed with the WISC-IV. General linear models and multiple regressions were conducted controlling for age, sex, and maternal education. Compared with children born at 30 completed weeks' gestation or later, children born at <28 completed weeks' gestation had less gray matter volume (GMV) and white matter volume (WMV) and poorer cognitive functions, including decreased full-scale IQ and processing speed. Differences in GMV partially mediated the relationship between GA and full-scale IQ in preterm-born children. In preterm children who are born AGA and without major complications, GA is associated with brain volume and cognitive functions. In particular, decreased brain volume becomes evident in the extremely preterm group (born at <28 completed weeks' gestation). In preterm children born at 30 completed weeks' gestation or later, the relationship of GA with brain volume and cognitive functions may be less strong than previously thought.
Normative morphometric data for cerebral cortical areas over the lifetime of the adult human brain.
Potvin, Olivier; Dieumegarde, Louis; Duchesne, Simon
2017-08-01
Proper normative data for anatomical measurements of cortical regions, which would allow quantification of brain abnormalities, are lacking. We developed norms for regional cortical surface areas, thicknesses, and volumes based on cross-sectional MRI scans from 2713 healthy individuals aged 18 to 94 years, using 23 samples provided by 21 independent research groups. The segmentation was conducted using FreeSurfer, a widely used and freely available automated segmentation software. Models predicting regional cortical estimates of each hemisphere were produced using age, sex, estimated total intracranial volume (eTIV), scanner manufacturer, magnetic field strength, and interactions as predictors. The explained variance for the left/right cortex was 76%/76% for surface area, 43%/42% for thickness, and 80%/80% for volume. The mean explained variance for all regions was 41% for surface areas, 27% for thicknesses, and 46% for volumes. Age, sex, and eTIV predicted most of the explained variance for surface areas and volumes, while age was the main predictor for thicknesses. Scanner characteristics generally predicted a limited amount of variance, but this effect was stronger for thicknesses than for surface areas and volumes. For new individuals, estimates of their expected surface area, thickness, and volume based on their characteristics and the scanner characteristics can be obtained using the derived formulas, as well as Z score effect sizes denoting the extent of the deviation from the normative sample. Models predicting normative values were validated in independent samples of healthy adults, showing satisfactory validation R². Deviations from the normative sample were measured in individuals with mild Alzheimer's disease and schizophrenia, and the expected patterns of deviations were observed. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
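A minimal sketch of how such norms are applied to a new individual: a linear normative prediction followed by a Z score effect size. The coefficient values below are placeholders, not the published formulas.

```python
def predict_thickness(age, sex, etiv_ml, coefs=(2.9, -0.004, 0.02, 1e-5)):
    # Hypothetical normative equation for one regional cortical
    # thickness (mm): intercept plus age, sex (0/1), and eTIV terms.
    # The coefficients are placeholders, not the derived ones.
    b0, b_age, b_sex, b_etiv = coefs
    return b0 + b_age * age + b_sex * sex + b_etiv * etiv_ml

def normative_z(observed, predicted, resid_sd):
    # Z score effect size: deviation from the normative prediction
    # in units of the normative model's residual SD
    return (observed - predicted) / resid_sd
```

A strongly negative Z for a thickness measure would flag atrophy relative to demographically matched peers, the intended use case for such norms.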
SU-G-201-15: Nomogram as an Efficient Dosimetric Verification Tool in HDR Prostate Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, J; Todor, D
Purpose: A nomogram as a simple QA tool for HDR prostate brachytherapy treatment planning has been developed and validated clinically. Reproducibility, including patient-to-patient and physician-to-physician variability, was assessed. Methods: The study was performed on HDR prostate implants from physicians A (n=34) and B (n=15) using different implant techniques and planning methodologies. The nomogram was implemented as an independent QA of computer-based treatment planning before plan execution. Normalized implant strength (total air kerma strength Sk·t in cGy·cm² divided by prescribed dose in cGy) was plotted as a function of PTV volume and of total V100. A quadratic equation was used to fit the data, with R² denoting the model's predictive power. Results: All plans showed good target coverage while OARs met the dose constraint guidelines. Vastly different implant and planning styles were reflected in the conformity index (entire dose matrix V100/PTV volume; physician A implants: 1.27±0.14, physician B: 1.47±0.17) and the PTV V150/PTV volume ratio (physician A: 0.34±0.09, physician B: 0.24±0.07). The quadratic model provided a better fit for the curved relationship between normalized implant strength and total V100 (or PTV volume) than a simple linear function. Unlike the normalized implant strength versus PTV volume nomogram, which differed between physicians, a unique quadratic nomogram (Sk·t)/D = −0.0008V² + 0.0542V + 1.1185 (R² = 0.9977) described the dependence of normalized implant strength on total V100 over all patients from both physicians, despite two different implant and planning philosophies. The normalized implant strength versus total V100 model also produced fewer deviant points and a significantly higher correlation. Conclusion: A simple and universal, Excel-based nomogram was created as an independent calculation tool for HDR prostate brachytherapy. Unlike similar attempts, our nomogram is insensitive to implant style and does not rely on reproducing dose calculations using the TG-43 formalism, thus making it a truly independent check.
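The reported quadratic lends itself directly to an independent plan check; in this sketch the 10% agreement tolerance is an assumed, illustrative threshold, not a value from the study.

```python
def nomogram_check(v100_cc, sk_t_over_d,
                   coeffs=(-0.0008, 0.0542, 1.1185), tol=0.10):
    # Predicted normalized implant strength (Sk*t)/D from the
    # reported quadratic in total V100 (cm^3)
    a, b, c = coeffs
    predicted = a * v100_cc**2 + b * v100_cc + c
    # Flag the plan if its actual normalized implant strength
    # deviates from the nomogram by more than tol (assumed 10%)
    return abs(sk_t_over_d - predicted) / predicted <= tol
```

A failing check would prompt a manual review of the plan before execution rather than outright rejection.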
NASA Astrophysics Data System (ADS)
Daude, F.; Galon, P.
2018-06-01
A Finite-Volume scheme for the numerical computations of compressible single- and two-phase flows in flexible pipelines is proposed based on an approximate Godunov-type approach. The spatial discretization is here obtained using the HLLC scheme. In addition, the numerical treatment of abrupt changes in area and network including several pipelines connected at junctions is also considered. The proposed approach is based on the integral form of the governing equations making it possible to tackle general equations of state. A coupled approach for the resolution of fluid-structure interaction of compressible fluid flowing in flexible pipes is considered. The structural problem is solved using Euler-Bernoulli beam finite elements. The present Finite-Volume method is applied to ideal gas and two-phase steam-water based on the Homogeneous Equilibrium Model (HEM) in conjunction with a tabulated equation of state in order to demonstrate its ability to tackle general equations of state. The extensive application of the scheme for both shock tube and other transient flow problems demonstrates its capability to resolve such problems accurately and robustly. Finally, the proposed 1-D fluid-structure interaction model appears to be computationally efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, B; He, W; Cvetkovic, D
Purpose: The purpose of this study is to compare the volume measurement of subcutaneous tumors in mice with different imaging platforms, namely a GE MRI scanner and a Sofie-Biosciences small animal CT scanner. Methods: A549 human lung carcinoma cells and FaDu human head and neck squamous cell carcinoma cells were implanted subcutaneously into the flanks of nude mice. Three FaDu tumors and three A549 tumors were included in this study. The MRI scans were done with a GE Signa 1.5 Tesla MR scanner using a fast T2-weighted sequence (70 mm FOV and 1.2 mm slice thickness), while the CT scans were done with the CT scanner on a Sofie-Biosciences G8 PET/CT platform dedicated to small animal studies (48 mm FOV and 0.2 mm slice thickness). Imaging contrast agent was not used in this study. Based on the DICOM images from the MRI and CT scans, the tumors were contoured with the Philips DICOM Viewer, and the tumor volumes were obtained by summing the contoured areas and multiplying by the slice thickness. Results: The volume measurements based on the CT scans agree reasonably with those obtained from the MR images for the subcutaneous tumors. The mean difference in the absolute tumor volumes between MRI- and CT-based measurements was found to be −6.2% ± 1.0%, with the difference defined as (VMR − VCT)*100%/VMR. Furthermore, we evaluated the normalized tumor volumes, defined for each tumor as V/V0, where V0 stands for the volume from the first MR or CT scan. The mean difference in the normalized tumor volumes was found to be 0.10% ± 0.96%. Conclusion: Despite the fact that the difference between normal and abnormal tissues is often less clear on small animal CT images than on MR images, one can still obtain reasonable tumor volume information with small animal CT scans for subcutaneous murine xenograft models.
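The slice-summation volume estimate and the difference metric quoted above can be sketched as:

```python
def tumor_volume_mm3(slice_areas_mm2, slice_thickness_mm):
    # Summation method described above: contoured area on each
    # slice summed, then multiplied by the slice thickness
    return sum(slice_areas_mm2) * slice_thickness_mm

def percent_difference(v_mr, v_ct):
    # (VMR - VCT) * 100% / VMR, the difference metric used above
    return (v_mr - v_ct) * 100.0 / v_mr
```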
Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D
2001-07-01
The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models, in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model, and a transparent shaded-surface model. The hybrid 3D visualization combines the advantages of the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationships of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.
Role of ion hydration for the differential capacitance of an electric double layer.
Caetano, Daniel L Z; Bossa, Guilherme V; de Oliveira, Vinicius M; Brown, Matthew A; de Carvalho, Sidney J; May, Sylvio
2016-10-12
The influence of soft, hydration-mediated ion-ion and ion-surface interactions on the differential capacitance of an electric double layer is investigated using Monte Carlo simulations and compared to various mean-field models. We focus on a planar electrode surface at physiological concentration of monovalent ions in a uniform dielectric background. Hydration-mediated interactions are modeled on the basis of Yukawa potentials that add to the Coulomb and excluded volume interactions between ions. We present a mean-field model that includes hydration-mediated anion-anion, anion-cation, and cation-cation interactions of arbitrary strengths. In addition, finite ion sizes are accounted for through excluded volume interactions, described either on the basis of the Carnahan-Starling equation of state or using a lattice gas model. Both our Monte Carlo simulations and mean-field approaches predict a characteristic double-peak (the so-called camel shape) of the differential capacitance; its decrease reflects the packing of the counterions near the electrode surface. The presence of hydration-mediated ion-surface repulsion causes a thin charge-depleted region close to the surface, which is reminiscent of a Stern layer. We analyze the interplay between excluded volume and hydration-mediated interactions on the differential capacitance and demonstrate that for small surface charge density our mean-field model based on the Carnahan-Starling equation is able to capture the Monte Carlo simulation results. In contrast, for large surface charge density the mean-field approach based on the lattice gas model is preferable.
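The hydration-mediated interaction described above enters as a Yukawa term added to the Coulomb interaction; a sketch of the resulting pair energy in thermal units, with the Yukawa amplitude and decay length as placeholder model parameters:

```python
import numpy as np

def ion_pair_energy_kt(r_nm, q1, q2, a_yukawa_kt_nm, decay_nm,
                       bjerrum_nm=0.7):
    # Coulomb term in kT units via the Bjerrum length
    # (about 0.7 nm in water at room temperature)
    coulomb = bjerrum_nm * q1 * q2 / r_nm
    # Yukawa term modeling hydration-mediated interactions;
    # amplitude and decay length are model parameters (placeholders)
    yukawa = a_yukawa_kt_nm * np.exp(-r_nm / decay_nm) / r_nm
    return coulomb + yukawa
```

In the paper's framework, separate Yukawa amplitudes would be assigned to anion-anion, anion-cation, and cation-cation pairs; the excluded volume contribution is handled separately (Carnahan-Starling or lattice gas) and is not included in this sketch.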
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey D. Evanseck; Jeffry D. Madura; Jonathan P. Mathews
2006-04-21
Molecular modeling was employed to both visualize and probe our understanding of carbon dioxide sequestration within a bituminous coal. A large-scale (>20,000 atoms) 3D molecular representation of Pocahontas No. 3 coal was generated. The model was constructed based on the review data of Stock and Muntean, the oxidation and decarboxylation data for aromatic cluster-size frequency of Stock and Obeng, and the combination of laser desorption mass spectrometry data with HRTEM, which enabled the inclusion of a molecular weight distribution. The model contains 21,931 atoms, with a molecular mass of 174,873 amu and an average molecular weight of 714 amu, in 201 structural components. The structure was evaluated against several characteristics to ensure a reasonable constitution (chemical and physical representation). The helium density of Pocahontas No. 3 coal is 1.34 g/cm³ (dmmf), and that of the model is 1.27 g/cm³. The structure is microporous, with a pore volume comprising 34% of the total volume, as expected for a coal of this rank. The representation was used to visualize CO₂ and CH₄ capacity, and the role of moisture in swelling and in reducing CO₂ and CH₄ capacity. Inclusion of 0.68% moisture by mass (ash-free) caused the model to swell by 1.2% (volume). Inclusion of CO₂ produced volumetric swelling of 4%.
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Durlofsky, L. J.
2016-10-01
A comprehensive framework for modeling flow in porous media containing thin, discrete features, which could be high-permeability fractures or low-permeability deformation bands, is presented. The key steps of the methodology are mesh generation, fine-grid discretization, upscaling, and coarse-grid discretization. Our specialized gridding technique combines a set of intersecting triangulated surfaces by constructing approximate intersections using existing edges. This procedure creates a conforming mesh of all surfaces, which defines the internal boundaries for the volumetric mesh. The flow equations are discretized on this conforming fine mesh using an optimized two-point flux finite-volume approximation. The resulting discrete model is represented by a list of control-volumes with associated positions and pore-volumes, and a list of cell-to-cell connections with associated transmissibilities. Coarse models are then constructed by the aggregation of fine-grid cells, and the transmissibilities between adjacent coarse cells are obtained using flow-based upscaling procedures. Through appropriate computation of fracture-matrix transmissibilities, a dual-continuum representation is obtained on the coarse scale in regions with connected fracture networks. The fine and coarse discrete models generated within the framework are compatible with any connectivity-based simulator. The applicability of the methodology is illustrated for several two- and three-dimensional examples. In particular, we consider gas production from naturally fractured low-permeability formations, and transport through complex fracture networks. In all cases, highly accurate solutions are obtained with significant model reduction.
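The cell-to-cell transmissibilities stored in the connection list are typically assembled from half-cell terms; a minimal sketch of the standard two-point flux construction, with the geometry reduced to a scalar face area and distance for illustration:

```python
def half_transmissibility(perm, face_area, dist):
    # Half-cell term k*A/d of the two-point flux approximation:
    # permeability times face area over the cell-center-to-face distance
    return perm * face_area / dist

def face_transmissibility(t_half_1, t_half_2):
    # Harmonic combination of the two half terms gives the
    # transmissibility stored on the cell-to-cell connection
    return t_half_1 * t_half_2 / (t_half_1 + t_half_2)
```

The flow-based upscaling step described above replaces these geometric formulas on the coarse grid: coarse transmissibilities are instead back-calculated from fine-scale flow solutions.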
A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol
Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard- or soft-phase material of a composite. We synthesized Sm₂Co₇/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe₁₂O₁₉/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.
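One plausible reading of a mass-fraction model for σs is a rule of mixtures over the measurable mass fractions; this sketch is an assumption for illustration, not necessarily the paper's equations.

```python
def composite_magnetization(sigma_hard, sigma_soft, w_hard):
    # Saturation magnetization (e.g. emu/g) of a hard/soft composite
    # weighted by the measurable mass fraction w_hard of the hard
    # phase; an illustrative rule-of-mixtures sketch
    return w_hard * sigma_hard + (1.0 - w_hard) * sigma_soft
```

The practical advantage stated in the abstract carries over directly: the weights are mass fractions from weighing the powders, not volume fractions that would require density assumptions for each phase.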
Surface tension models for a multi-material ALE code with AMR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wangyi; Koniges, Alice; Gott, Kevin
A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. In conclusion, based on the results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.
Predictions of Poisson's ratio in cross-ply laminates containing matrix cracks and delaminations
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Allen, David H.; Nottorf, Eric W.
1989-01-01
A damage-dependent constitutive model for laminated composites has been developed for the combined damage modes of matrix cracks and delaminations. The model is based on the concept of continuum damage mechanics and uses second-order tensor-valued internal state variables to represent each mode of damage. The internal state variables are defined as the local volume average of the relative crack face displacements. Since the local volume for delaminations is specified at the laminate level, the constitutive model takes the form of laminate analysis equations modified by the internal state variables. Model implementation is demonstrated for the laminate engineering modulus E(x) and Poisson's ratio nu(xy) of quasi-isotropic and cross-ply laminates. The model predictions are in close agreement with experimental results obtained for graphite/epoxy laminates.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological, and industrial applications. Through input variable selection, irrelevant or redundant variables are eliminated and a suitable subset of variables is identified as the input of a model; this also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN), and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVMs) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
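As a rough stand-in for the PMI, GA-ANN, and IIS methods named above, a simple correlation-based filter illustrates the core idea of keeping relevant inputs while discarding redundant ones; the 0.9 redundancy threshold and the greedy scheme are arbitrary illustrative choices, not any of the three published algorithms.

```python
import numpy as np

def greedy_select(X, y, k, redundancy_r=0.9):
    # Rank candidate inputs by |correlation| with the target
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    order = np.argsort(scores)[::-1]
    selected = []
    for j in order:
        if len(selected) == k:
            break
        # Skip candidates nearly collinear with an already-chosen input
        if all(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) <= redundancy_r
               for s in selected):
            selected.append(int(j))
    return selected
```

PMI improves on plain correlation by measuring the information a candidate adds beyond the already-selected set, which this greedy filter only approximates through the redundancy check.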
Age group classification and gender detection based on forced expiratory spirometry.
Cosgun, Sema; Ozbek, I Yucel
2015-08-01
This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and support vector machines (SVMs). In the final stage, the gender (or age group) of a test subject is estimated using the trained GMM (or SVM) model. Experiments were evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rates of both the GMM and SVM methods based on the FES test are more than 99.3% and 96.8% for gender and age group classification, respectively.
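A single Gaussian per class, in place of the full GMMs, is enough to sketch the maximum-likelihood detection stage; the scalar feature (a forced-vital-capacity-like quantity in liters) and the class parameters below are hypothetical.

```python
import math

def gaussian_loglik(x, mean, var):
    # Log-likelihood of a scalar feature under one Gaussian component
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def classify(feature, class_params):
    # Pick the class whose Gaussian gives the highest likelihood;
    # a one-component stand-in for the per-class GMMs described above
    return max(class_params,
               key=lambda c: gaussian_loglik(feature, *class_params[c]))
```

With a trained GMM, the log-likelihood would instead be a log-sum over weighted components, and the feature would be a vector extracted from the volume-time curve and flow-volume loop.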
An implicit numerical model for multicomponent compressible two-phase flow in porous media
NASA Astrophysics Data System (ADS)
Zidane, Ali; Firoozabadi, Abbas
2015-11-01
We introduce a new implicit approach to model multicomponent compressible two-phase flow in porous media with species transfer between the phases. In the implicit discretization of the species transport equation in our formulation, we calculate for the first time the derivative of the molar concentration of component i in phase α (cα,i) with respect to the total molar concentration (ci) at constant volume V and temperature T. The species transport equation is discretized by the finite volume (FV) method. The fluxes are calculated based on powerful features of the mixed finite element (MFE) method, which provides the pressure at grid-cell interfaces in addition to the pressure at the grid-cell center. The efficiency of the proposed model is demonstrated by comparing our results with three existing implicit compositional models. Our algorithm has low numerical dispersion despite being based on first-order space discretization. The proposed algorithm is very robust.
Diffusion of multiple species with excluded-volume effects.
Bruna, Maria; Chapman, S Jonathan
2012-11-28
Stochastic models of diffusion with excluded-volume effects are used to model many biological and physical systems at a discrete level. The average properties of the population may be described by a continuum model based on partial differential equations. In this paper we consider multiple interacting subpopulations/species and study how inter-species competition emerges at the population level. Each individual is described as a finite-sized hard-core interacting particle undergoing Brownian motion. The link between the discrete stochastic equations of motion and the continuum model is considered systematically using the method of matched asymptotic expansions. The system for two species leads to a nonlinear cross-diffusion system for each subpopulation, which captures the enhancement of the effective diffusion rate due to excluded-volume interactions between particles of the same species and the diminishment due to particles of the other species. This model can explain two alternative notions of the diffusion coefficient that are often confounded, namely collective diffusion and self-diffusion. Simulations of the discrete system show good agreement with the analytic results.
Inception and variability of the Antarctic ice sheet across the Eocene-Oligocene transition
NASA Astrophysics Data System (ADS)
Stocchi, Paolo; Galeotti, Simone; Ladant, Jan-Baptiste; DeConto, Robert; Vermeersen, Bert; Rugenstein, Maria
2014-05-01
Climate cooling throughout the middle to late Eocene (~48-34 million years ago, Ma) triggered the transition from hothouse to icehouse conditions. Based on deep-sea marine δ18O values, a continental-scale Antarctic Ice Sheet (AIS) rapidly developed across the Eocene-Oligocene transition (EOT) in two ~200 kyr-spaced phases between 34.0 and 33.5 Ma. Regardless of the geographical configuration of southern ocean gateways, geochemical data and ice-sheet modelling show that AIS glaciation initiated as atmospheric CO2 fell below ~2.5 times pre-industrial values. The AIS likely reached or even exceeded present-day dimensions. Quantifying the magnitude and timing of AIS volume variations by means of δ18O records is hampered by the fact that the latter reflect a coupled signal of temperature and ice-sheet volume. Moreover, bathymetric variations based on marine geologic sections are affected by large uncertainties and, most importantly, reflect the local response of relative sea level (rsl) to ice volume fluctuations rather than the global eustatic signal. AIS-proximal and Northern Hemisphere (NH) marine settings show opposite trends of rsl change across the EOT. In fact, consistently with central values based on δ18O records, a 60 ± 20 m rsl drop is estimated from NH low-latitude shallow marine sequences. Conversely, sedimentary facies from shallow shelfal areas in the proximity of the AIS witness a 50-150 m rsl rise across the EOT. Accounting for ice-load-induced crustal and geoidal deformations and for the mutual gravitational attraction between the growing AIS and the ocean water is a necessary requirement to reconcile near- and far-field rsl sites, regardless of tectonics and of any other possible local contamination. In this work we investigate AIS inception and variability across the EOT by combining the observed rsl changes with predictions based on numerical modeling of Glacial Isostatic Adjustment (GIA).
We solve the gravitationally self-consistent Sea Level Equation for two different and independent AIS models, both driven by atmospheric CO2 variations and evolving on different Antarctic topographies. In particular, minimum and maximum AIS volumes, respectively of ~55 m and ~70 m equivalent sea level (esl), stem from a smaller and a larger Antarctic topography. Minimum and maximum GIA predictions at the NH rsl sites respectively correspond to the lower limit and central value of the EOT rsl drop inferred from geological data. For both GIA models, the departures from the eustatic trend significantly increase southward toward Antarctica, where the AIS growth is accompanied by a rsl rise. Accordingly, the cyclochronological record of sedimentary cycles retrieved from Cape Roberts Project Drillcore CRP-3 (Victoria Land Basin) witnesses a deepening across the EOT. Most importantly, the CRP-3 record shows that full glacial conditions consistent with the maximum AIS model dimensions were reached only at ~32.8 Ma, while ice-sheet volume fluctuations around the minimum AIS model volume persisted during the first million years of glaciation.
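The rough eustatic bookkeeping behind the ~55-70 m esl figures can be sketched as follows: an ice volume is converted to its sea-level equivalent by scaling with the ice/seawater density ratio and dividing by the ocean area. The constants are round textbook values, not the ones used in the GIA modelling.

```python
# Illustrative esl conversion; constants are rough textbook values.
RHO_ICE = 917.0        # kg/m^3
RHO_SEAWATER = 1027.0  # kg/m^3
OCEAN_AREA = 3.61e14   # m^2

def ice_volume_to_esl(ice_volume_m3):
    """Eustatic sea-level equivalent (m) of a grounded ice volume (m^3)."""
    return ice_volume_m3 * RHO_ICE / RHO_SEAWATER / OCEAN_AREA
```

Under these assumptions, an AIS of roughly 2.8e16 m^3 corresponds to about 70 m esl.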
NASA Astrophysics Data System (ADS)
Nasri, S.; Cudennec, C.; Albergel, J.; Berndtsson, R.
2004-02-01
In the beginning of the 1990s, the Tunisian Ministry of Agriculture launched an ambitious program for constructing small hillside reservoirs in the northern and central regions of the country. At present, more than 720 reservoirs have been created. They consist of small compacted earth dams supplied with a horizontal overflow weir. Due to the lack of hydrological data and the area's extreme floods, however, it is very difficult to design the overflow weirs. Also, catchments are very sensitive to erosion and the reservoirs are rapidly silted up. Consequently, prediction of flood volumes for important rainfall events becomes crucial. Few hydrological observations, however, exist for the catchment areas. For this purpose a geomorphological model methodology is presented to predict the shape and volume of hydrographs for important floods. This model is built around a production function that defines the net storm rainfall (the portion of rainfall during a storm which reaches a stream channel as direct runoff) from the total rainfall (observed rainfall in the catchment) and a transfer function based on the most complete possible definition of the surface drainage system. Observed rainfall during 5-min time steps was used in the model. The model runoff generation is based on surface drainage characteristics which can be easily extracted from maps. The model was applied to two representative experimental catchments in central Tunisia. The conceptual rainfall-runoff model based on surface topography and drainage network was seen to reproduce observed runoff satisfactorily. The calibrated model was used to estimate runoff for 5, 10, 20, and 50 year rainfall return periods in terms of runoff volume, maximum runoff, and the general shape of the runoff hydrograph. Practical conclusions are made for designing hillside reservoirs and for extrapolating results using this model methodology to ungauged small catchments in semiarid Tunisia.
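The two-stage structure described above can be sketched as a runoff-coefficient production function followed by convolution with a unit hydrograph standing in for the geomorphological transfer function. The coefficient and hydrograph shape below are illustrative assumptions, not calibrated values.

```python
import numpy as np

# Production: total rainfall -> net storm rainfall (per 5-min step, mm).
def production(rain_mm, runoff_coef=0.3):
    return runoff_coef * np.asarray(rain_mm, dtype=float)

# Transfer: net rainfall convolved with a unit hydrograph gives the
# direct-runoff hydrograph (the unit hydrograph sums to 1, conserving mass).
def transfer(net_rain_mm, unit_hydrograph):
    return np.convolve(net_rain_mm, unit_hydrograph)

uh = np.array([0.1, 0.3, 0.4, 0.2])     # illustrative triangular-ish shape
storm = [0.0, 4.0, 10.0, 6.0, 0.0]      # 5-min rainfall depths (mm)
q = transfer(production(storm), uh)
```

Mass conservation is a useful sanity check: the hydrograph total equals the net storm rainfall total.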
NASA Astrophysics Data System (ADS)
Benedek, Judit; Papp, Gábor; Kalmár, János
2018-04-01
Beyond the rectangular prism, the polyhedron, as a discrete volume element, can also be used to model the density distribution inside 3D geological structures. The evaluation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, requires roughly twice the runtime of the rectangular prism computations. Although the "more detailed the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field on the significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes the "less" can be equivalent to the "more" in a statistical sense. As a consequence, the processing time of the related complex formulae can be significantly reduced by optimizing the number of volume elements based on the accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by three points per grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented as well.
The efficiency of the static approaches may provide more than a 90% reduction in computation time in favourable situations, without loss of reliability of the calculated gravity field parameters.
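For reference, the rectangular-prism baseline against which the polyhedral runtime is compared has a well-known closed form (a Nagy-type formula). The sketch below is an illustrative implementation, not the authors' code; far from the source it agrees with the point-mass value G·M/d², which is the kind of check that justifies coarser elements when data errors allow.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(x1, x2, y1, y2, z1, z2, rho):
    """Vertical attraction (m/s^2) at the origin of a homogeneous
    rectangular prism [x1,x2] x [y1,y2] x [z1,z2], z positive downward
    (Nagy-type closed formula, evaluated at the 8 corners with
    alternating signs)."""
    gz = 0.0
    for i, x in enumerate((x1, x2)):
        for j, y in enumerate((y1, y2)):
            for k, z in enumerate((z1, z2)):
                r = math.sqrt(x * x + y * y + z * z)
                term = (x * math.log(y + r)
                        + y * math.log(x + r)
                        - z * math.atan2(x * y, z * r))
                gz += (-1) ** (i + j + k) * term
    return G * rho * gz

# A 100 m cube of density 2670 kg/m^3 centered 1000 m below the station:
gz = prism_gz(-50.0, 50.0, -50.0, 50.0, 950.0, 1050.0, 2670.0)
```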
Mathematical modeling of fluid-electrolyte alterations during weightlessness
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1984-01-01
Fluid-electrolyte metabolism and renal-endocrine control as they pertain to adaptation to weightlessness were studied. The mathematical models that have been particularly useful are discussed. However, the focus of the report is on the physiological meaning of the computer studies. A discussion of the major ground-based analogs of weightlessness (for example, head-down tilt, water immersion, and bed rest) is included, with a comparison of findings. Several important zero-g phenomena are described, including acute fluid volume regulation, blood volume regulation, circulatory changes, longer-term fluid-electrolyte adaptations, hormonal regulation, and body composition changes. Hypotheses are offered to explain the major findings in each area, and these are integrated into a larger hypothesis of space flight adaptation. A conceptual foundation for fluid-electrolyte metabolism, blood volume regulation, and cardiovascular regulation is reported.
Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid
2005-06-01
The FE modeling of complex anatomical structures has not yet been solved satisfactorily. Voxel-based, as opposed to contour-based, algorithms allow an automated mesh generation based on the image data. Nonetheless their geometric precision is limited. We developed an automated mesh generator that combines the advantages of voxel-based generation with improved representation of the geometry by displacement of nodes on the object surface. Models of an artificial 3D pipe section and a skull base were generated with different mesh densities using the newly developed geometric, unsmoothed, and smoothed voxel generators. Compared to the analytic calculation of the 3D pipe-section model, the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models with small volume error, and 0.126-0.273 for the geometric models. The highest element-energy error, as a criterion for the mesh quality, was 2.61 × 10^-2 N mm, 2.46 × 10^-2 N mm, and 1.81 × 10^-2 N mm for the unsmoothed, smoothed, and geometric voxel models, respectively. The geometric model of the 3D skull base resulted in the lowest element-energy error and volume error. This algorithm also allowed the best representation of anatomical details. The presented geometric mesh generator is universally applicable and allows an automated and accurate modeling by combining the advantages of the voxel technique and of improved surface modeling.
NASA Astrophysics Data System (ADS)
Kim, P.; Choi, Y.; Ghim, Y. S.
2016-12-01
Both a sunphotometer (Cimel, CE-318) and a skyradiometer (Prede, POM-02) were operated in May 2015 as a part of the Megacity Air Pollution Studies-Seoul (MAPS-Seoul) campaign. These instruments were collocated at the Hankuk University of Foreign Studies (Hankuk_UFS) site of the AErosol RObotic NETwork (AERONET) and the Yongin (YGN) site of the SKYradiometer NETwork (SKYNET). The aerosol volume size distribution at the surface was measured using a wide range aerosol spectrometer (WRAS) system consisting of a scanning mobility particle sizer (Grimm, Model 5.416; 45 bins, 0.01-1.09 μm) and an optical particle counter (Grimm, Model 1.109; 31 bins, 0.27-34 μm). The measurement site (37.34°N, 127.27°E, 167 m above sea level) is located about 35 km southeast of downtown Seoul. To investigate the discrepancies in volume concentrations, effective diameters, and fine-mode volume fractions, we compared the volume size distributions from the sunphotometer, skyradiometer, and WRAS system when measurement times coincided within 5 minutes, given that the measurement intervals differed between instruments.
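Two of the compared quantities can be sketched directly from a binned number size distribution: the effective diameter (ratio of third to second moments) and the fine-mode volume fraction below an assumed 1 μm cut. The bin diameters and counts below are invented for illustration.

```python
import numpy as np

def effective_diameter(d_um, n):
    """Effective diameter: sum(n d^3) / sum(n d^2) over the bins."""
    d, n = np.asarray(d_um, float), np.asarray(n, float)
    return (n * d ** 3).sum() / (n * d ** 2).sum()

def fine_volume_fraction(d_um, n, cut_um=1.0):
    """Fraction of total volume in bins at or below the fine-mode cut."""
    d, n = np.asarray(d_um, float), np.asarray(n, float)
    v = n * d ** 3                 # proportional to volume per bin
    return v[d <= cut_um].sum() / v.sum()
```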
An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.
Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen
2018-06-01
This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in the object-oriented programming language Matlab Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationships. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or the respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derived from literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in the literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of the baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in the literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation. Copyright © 2018 Elsevier B.V. All rights reserved.
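A minimal sketch of the kind of constitutive element such a model composes: one compartment with a non-linear (exponential) pressure-volume curve, filled through a linear resistance. All parameter values are illustrative assumptions, not taken from the published model.

```python
import math

def compartment_pressure(v, v0=1.0, e0=5.0, k=1.5):
    """Non-linear P-V relation: pressure rises steeply as volume exceeds v0."""
    return e0 * (math.exp(k * (v - v0)) - 1.0)

def simulate_filling(p_in=20.0, r=10.0, v=1.0, dt=0.001, steps=5000):
    """Explicit Euler integration of dV/dt = (P_in - P(V)) / R."""
    for _ in range(steps):
        v += dt * (p_in - compartment_pressure(v)) / r
    return v
```

At steady state P(V) = P_in, so the equilibrium volume is v0 + ln(1 + P_in/e0)/k, which the simulation approaches.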
Preoperative computer simulation for planning of vascular access surgery in hemodialysis patients.
Zonnebeld, Niek; Huberts, Wouter; van Loon, Magda M; Delhaas, Tammo; Tordoir, Jan H M
2017-03-06
The arteriovenous fistula (AVF) is the preferred vascular access for hemodialysis patients. Unfortunately, 20-40% of all constructed AVFs fail to mature (FTM), and are therefore not usable for hemodialysis. AVF maturation importantly depends on postoperative blood volume flow. Predicting patient-specific immediate postoperative flow could therefore support surgical planning. A computational model predicting blood volume flow is available, but the effect of blood flow predictions on the clinical endpoint of maturation (at least 500 mL/min blood volume flow, diameter of the venous cannulation segment ≥4 mm) remains undetermined. A multicenter randomized clinical trial will be conducted in which 372 patients will be randomized (1:1 allocation ratio) between conventional healthcare and computational model-aided decision making. All patients are extensively examined using duplex ultrasonography (DUS) during preoperative assessment (12 venous and 11 arterial diameter measurements; 3 arterial volume flow measurements). The computational model will predict patient-specific immediate postoperative blood volume flows based on this DUS examination. Using these predictions, the preferred AVF configuration is recommended for the individual patient (radiocephalic, brachiocephalic, or brachiobasilic). The primary endpoint is FTM rate at six weeks in both groups, secondary endpoints include AVF functionality and patency rates at 6 and 12 months postoperatively. ClinicalTrials.gov (NCT02453412), and ToetsingOnline.nl (NL51610.068.14).
Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively.
In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.
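The reported image metrics follow the conventional region-of-interest definitions in Hounsfield units, sketched below. The example numbers are invented, not the study's measurements.

```python
def snr(mean_hu, noise_sd):
    """Signal-to-noise ratio of a region of interest: mean / noise SD."""
    return mean_hu / noise_sd

def cnr(mean_a, mean_b, noise_sd):
    """Contrast-to-noise ratio between two regions."""
    return abs(mean_a - mean_b) / noise_sd
```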
Bivariate analysis of floods in climate impact assessments.
Brunner, Manuela Irene; Sikorska, Anna E; Seibert, Jan
2018-03-01
Climate impact studies regarding floods usually focus on peak discharges and a bivariate assessment of peak discharges and hydrograph volumes is not commonly included. A joint consideration of peak discharges and hydrograph volumes, however, is crucial when assessing flood risks for current and future climate conditions. Here, we present a methodology to develop synthetic design hydrographs for future climate conditions that jointly consider peak discharges and hydrograph volumes. First, change factors are derived based on a regional climate model and are applied to observed precipitation and temperature time series. Second, the modified time series are fed into a calibrated hydrological model to simulate runoff time series for future conditions. Third, these time series are used to construct synthetic design hydrographs. The bivariate flood frequency analysis used in the construction of synthetic design hydrographs takes into account the dependence between peak discharges and hydrograph volumes, and represents the shape of the hydrograph. The latter is modeled using a probability density function while the dependence between the design variables peak discharge and hydrograph volume is modeled using a copula. We applied this approach to a set of eight mountainous catchments in Switzerland to construct catchment-specific and season-specific design hydrographs for a control and three scenario climates. Our work demonstrates that projected climate changes have an impact not only on peak discharges but also on hydrograph volumes and on hydrograph shapes both at an annual and at a seasonal scale. These changes are not necessarily proportional which implies that climate impact assessments on future floods should consider more flood characteristics than just flood peaks. Copyright © 2017. Published by Elsevier B.V.
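The bivariate idea can be sketched by sampling dependent (peak discharge, hydrograph volume) pairs from a copula and mapping the uniform marginals through inverse CDFs. The Clayton copula family, theta = 2, and Gumbel marginals below are illustrative assumptions, not the models fitted in the study.

```python
import math
import random

def clayton_pair(theta, rng):
    """Draw one (u, v) pair from a Clayton copula by conditional sampling."""
    u = rng.random()
    w = rng.random()
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

def gumbel_inv(p, mu, beta):
    """Inverse CDF of the Gumbel distribution, a common flood marginal."""
    return mu - beta * math.log(-math.log(p))

rng = random.Random(42)
pairs = [clayton_pair(2.0, rng) for _ in range(2000)]
peaks = [gumbel_inv(u, 100.0, 25.0) for u, _ in pairs]  # peak discharge, m^3/s
vols = [gumbel_inv(v, 5.0, 1.5) for _, v in pairs]      # volume, million m^3
```

The copula induces the positive dependence between peaks and volumes that a univariate peak-only analysis ignores.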
NASA Astrophysics Data System (ADS)
Charco, M.; Rodriguez Molina, S.; Gonzalez, P. J.; Negredo, A. M.; Poland, M. P.; Schmidt, D. A.
2017-12-01
The Three Sisters volcanic region, Oregon (USA), is one of the most active volcanic areas in the Cascade Range and is densely populated with eruptive vents. An extensive area just west of South Sister volcano has been actively uplifting since about 1998. InSAR data from 1992 through 2001 showed an uplift rate in the area of 3-4 cm/yr. The deformation rate then decreased considerably between 2004 and 2006, as shown by both InSAR and continuous GPS measurements. Once the magmatic system geometry and location are determined, a linear inversion of all available GPS and InSAR data is performed in order to estimate the volume changes of the source along the analyzed time interval. For doing so, we applied a technique based on the Truncated Singular Value Decomposition (TSVD) of the Green's function matrix representing the linear inversion. Here, we develop a strategy to provide a cut-off for truncation, removing the smallest singular values without too much loss of data resolution, balanced against the stability of the method. Furthermore, the strategy gives us a quantification of the uncertainty of the volume change time series. The strength of the methodology resides in allowing the joint inversion of InSAR measurements from multiple tracks with different look angles and three-component GPS measurements from multiple sites. Finally, we analyze the temporal behavior of the source volume changes using a new analytical model that describes the process of injecting magma into a reservoir surrounded by a viscoelastic shell. This dynamic model is based on Hagen-Poiseuille flow through a vertical conduit that leads to an increase in pressure within a spherical reservoir and time-dependent surface deformation. The volume time series are compared to predictions from the dynamic model to constrain model parameters, namely the characteristic Poiseuille and Maxwell time scales, inlet and outlet injection pressure, and source and shell geometries.
The modeling approach used here could be used to develop a mathematically rigorous strategy for including time-series deformation data in the interpretation of volcanic unrest.
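The truncation step can be sketched as follows: keep only the k largest singular values of the Green's function matrix G when solving G m = d for the source volume changes m. The matrix and data here are synthetic, not the Three Sisters dataset.

```python
import numpy as np

def tsvd_solve(G, d, k):
    """Least-squares solution using only the k largest singular values."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ d))

rng = np.random.default_rng(0)
G = rng.standard_normal((30, 5))              # 30 observations, 5 epochs
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d = G @ m_true + 0.01 * rng.standard_normal(30)
m_est = tsvd_solve(G, d, k=5)
```

Lowering k stabilizes the inverse at the cost of resolution, which is exactly the trade-off the cut-off strategy above addresses.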
NASA Astrophysics Data System (ADS)
Yu, H.; Gu, H.
2017-12-01
A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion, and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters in order to make accurate predictions with higher resolution in both vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain high-resolution seismic velocity with higher frequency content, to be used as the velocity input for seismic pressure prediction, and the density dataset to calculate accurate Overburden Pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both structural variability and similarity of seismic waveform are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs, and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity, and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are then determined in a trace-by-trace multivariate regression analysis on the petrophysical data.
The coefficients are used to convert the velocity, porosity, and shale volume datasets to effective stress and then to calculate formation pressure with OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well log pressure prediction and give predicted pressure values close to pressure measurements from well testing.
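The conversion step can be sketched under an assumed linear multivariate form, sigma_eff = a0 + a1·velocity + a2·porosity + a3·v_shale, calibrated by least squares, with pore pressure = OBP − sigma_eff. The linear form and all numbers below are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

def calibrate(velocity, porosity, vshale, sigma_eff):
    """Fit the coefficients a0..a3 by least squares."""
    X = np.column_stack([np.ones_like(velocity), velocity, porosity, vshale])
    coef, *_ = np.linalg.lstsq(X, sigma_eff, rcond=None)
    return coef

def pore_pressure(coef, obp, velocity, porosity, vshale):
    """Pore pressure = overburden pressure minus predicted effective stress."""
    sigma = coef[0] + coef[1] * velocity + coef[2] * porosity + coef[3] * vshale
    return obp - sigma

# Synthetic demonstration (units arbitrary, e.g. MPa).
v = np.linspace(2000.0, 4000.0, 50)    # interval velocity
phi = np.linspace(0.30, 0.10, 50)      # porosity
vsh = np.linspace(0.20, 0.60, 50)      # shale volume
sigma = 1.0 + 0.005 * v - 8.0 * phi + 2.0 * vsh
coef = calibrate(v, phi, vsh, sigma)
```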
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, SP; Quon, H; Cheng, Z
2015-06-15
Purpose: To extend the capabilities of knowledge-based treatment planning beyond simple dose queries by incorporating validated patient outcome models. Methods: From an analytic, relational database of 684 head and neck cancer patients, 372 patients were identified having dose data for both left and right parotid glands as well as baseline and follow-up xerostomia assessments. For each existing patient, knowledge-based treatment planning was simulated by querying the dose-volume histograms and geometric shape relationships (overlap volume histograms) for all other patients. Dose predictions were captured at normalized volume thresholds (NVT) of 0%, 10%, 20%, 30%, 40%, 50%, and 85% and were compared with the actual achieved doses using the Wilcoxon signed-rank test. Next, a logistic regression model was used to predict the maximum severity of xerostomia up to three months following radiotherapy. Baseline xerostomia scores were subtracted from follow-up assessments and were also included in the model. The relative risks from predicted doses and actual doses were computed and compared. Results: The predicted doses for both parotid glands were significantly less than the achieved doses (p < 0.0001), with differences ranging from 830 cGy ± 1270 cGy (0% NVT) to 1673 cGy ± 1197 cGy (30% NVT). The modelled risk of xerostomia ranged from 54% to 64% for achieved doses and from 33% to 51% for the dose predictions. Relative risks varied from 1.24 to 1.87, with the maximum relative risk occurring at 85% NVT. Conclusions: Data-driven generation of treatment planning objectives without consideration of the underlying normal tissue complication probability may result in inferior plans, even if quality metrics indicate otherwise. Inclusion of complication models in knowledge-based treatment planning is necessary in order to close the feedback loop between radiotherapy treatments and patient outcomes.
Future work includes advancing and validating complication models in the context of knowledge-based treatment planning. This work is supported by Philips Radiation Oncology Systems.
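The outcome step can be sketched as a logistic model mapping a parotid dose summary to xerostomia risk, with the relative risk comparing achieved against predicted doses. The coefficients are invented placeholders, not the fitted model from the database.

```python
import math

def xerostomia_risk(mean_dose_gy, b0=-2.2, b1=0.05):
    """Logistic dose-response; risk is 0.5 where b0 + b1*dose = 0."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * mean_dose_gy)))

def relative_risk(dose_achieved_gy, dose_predicted_gy):
    """Ratio of modelled risks at the achieved vs predicted dose."""
    return xerostomia_risk(dose_achieved_gy) / xerostomia_risk(dose_predicted_gy)
```

A relative risk above 1 indicates that simply matching the knowledge-based dose prediction would have lowered the modelled complication probability.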
Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Miklavcic, Damijan
2016-01-01
Background: Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain-barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Material and methods: Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of MRIs of rat brains following electroporation treatments. Results: Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r2 = 0.79; p < 0.008, r2 = 0.91; p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete cell death and no cell death thresholds was relatively narrow (between 0.88-0.91) even for small numbers of pulses and depended weakly on the number of pulses. For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. Conclusions: The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup. PMID:27069447
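The Peleg-Fermi dependence can be sketched as a survival probability that falls sigmoidally with electric field E, with the critical field Ec and kinetic constant A decaying with pulse number n (a commonly used parameterization; the constants below are illustrative, not the fitted values).

```python
import math

def peleg_fermi(e_field, n_pulses, ec0=1000.0, k1=0.03, a0=150.0, k2=0.03):
    """Survival probability under the Peleg-Fermi parameterization:
    S = 1 / (1 + exp((E - Ec(n)) / A(n))), Ec and A decaying with n."""
    ec = ec0 * math.exp(-k1 * n_pulses)   # critical field falls with n
    a = a0 * math.exp(-k2 * n_pulses)     # transition width narrows with n
    return 1.0 / (1.0 + math.exp((e_field - ec) / a))
```

By construction, survival is 0.5 at E = Ec(n), and adding pulses lowers survival at a fixed field, consistent with the plateau behavior described above.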
United States Air Force Summer Faculty Research Program (1983). Technical Report. Volume 2
1983-12-01
filters are given below: (1) Inverse filter - based on the model given in Eq. (2) and the criterion of minimizing the norm (i.e., power) of the... and compared based on their performances in machine classification under a variety of blur and noise conditions. These filters are analyzed to... criteria based on various assumptions of the image models. In practice, filter performance varies with the type of image, the blur, and the noise conditions
NASA Astrophysics Data System (ADS)
Gulliver, John; de Hoogh, Kees; Fecht, Daniela; Vienneau, Danielle; Briggs, David
2011-12-01
The development of geographical information system techniques has opened up a wide array of methods for air pollution exposure assessment. The extent to which these provide reliable estimates of air pollution concentrations is nevertheless not clearly established. Nor is it clear which methods or metrics should be preferred in epidemiological studies. This paper compares the performance of ten different methods and metrics in terms of their ability to predict mean annual PM10 concentrations across 52 monitoring sites in London, UK. Metrics analysed include indicators (distance to nearest road, traffic volume on nearest road, heavy duty vehicle (HDV) volume on nearest road, road density within 150 m, traffic volume within 150 m and HDV volume within 150 m) and four modelling approaches: based on the nearest monitoring site, kriging, dispersion modelling and land use regression (LUR). Measures were computed in a GIS, and the resulting metrics calibrated and validated against monitoring data using a form of grouped jack-knife analysis. The results show that PM10 concentrations across London show little spatial variation. As a consequence, most methods can predict the average without serious bias. Few of the approaches, however, show good correlations with monitored PM10 concentrations, and most predict no better than a simple classification based on site type. Only land use regression reaches acceptable levels of correlation (R2 = 0.47), though this can be improved by also including information on site type. This might therefore be taken as a recommended approach in many studies, though care is needed in developing meaningful land use regression models, and like any method they need to be validated against local data before their application as part of epidemiological studies.
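A minimal land-use-regression sketch: ordinary least squares of monitored annual-mean PM10 on GIS-derived predictors (e.g. traffic volume within 150 m, road density). The data below are synthetic; a real study would validate against held-out sites, as in the grouped jack-knife analysis above.

```python
import numpy as np

def fit_lur(X, y):
    """Least-squares fit with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_lur(coef, X):
    """Predicted concentrations at sites with predictor matrix X."""
    return np.column_stack([np.ones(len(X)), X]) @ coef

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(52, 2))   # two scaled predictors, 52 sites
y = 22.0 + 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * rng.standard_normal(52)
coef = fit_lur(X, y)
```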
Maxwell, M; Howie, J G; Pryde, C J
1998-01-01
BACKGROUND: Prescribing matters (particularly budget setting and research into prescribing variation between doctors) have been handicapped by the absence of credible measures of the volume of drugs prescribed. AIM: To use the defined daily dose (DDD) method to study variation in the volume and cost of drugs prescribed across the seven main British National Formulary (BNF) chapters with a view to comparing different methods of setting prescribing budgets. METHOD: Study of one year of prescribing statistics from all 129 general practices in Lothian, covering 808,059 patients: analyses of prescribing statistics for 1995 to define volume and cost/volume of prescribing for one year for 10 groups of practices defined by the age and deprivation status of their patients, for seven BNF chapters; creation of prescribing budgets for 1996 for each individual practice based on the use of target volume and cost statistics; comparison of 1996 DDD-based budgets with those set using the conventional historical approach; and comparison of DDD-based budgets with budgets set using a capitation-based formula derived from local cost/patient information. RESULTS: The volume of drugs prescribed was affected by the age structure of the practices in BNF Chapters 1 (gastrointestinal), 2 (cardiovascular), and 6 (endocrine), and by deprivation structure for BNF Chapters 3 (respiratory) and 4 (central nervous system). Costs per DDD in the major BNF chapters were largely independent of age, deprivation structure, or fundholding status. Capitation and DDD-based budgets were similar to each other, but both differed substantially from historic budgets. One practice in seven gained or lost more than £100,000 per annum using DDD or capitation budgets compared with historic budgets. The DDD-based budget, but not the capitation-based budget, can be used to set volume-specific prescribing targets.
CONCLUSIONS: DDD-based and capitation-based prescribing budgets can be set using a simple explanatory model and generalizable methods. In this study, both differed substantially from historic budgets. DDD budgets could be created to accommodate new prescribing strategies and raised or lowered to reflect local intentions to alter overall prescribing volume or cost targets. We recommend that future work on setting budgets and researching prescribing variations should be based on DDD statistics. PMID:10024703
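A DDD-based budget of the kind described can be sketched as follows: for each BNF chapter, a target prescribing volume (DDDs per patient) times a cost per DDD, summed over the practice list size. All figures below are invented placeholders, not Lothian values.

```python
def ddd_budget(list_size, targets):
    """targets maps chapter -> (ddd_per_patient, cost_per_ddd_pounds)."""
    return list_size * sum(v * c for v, c in targets.values())

targets = {
    "gastrointestinal": (40.0, 0.20),
    "cardiovascular": (120.0, 0.15),
    "respiratory": (60.0, 0.25),
}
budget = ddd_budget(6000, targets)
```

Because volume and cost enter separately, the volume targets can be raised or lowered independently of cost assumptions, which is what makes the DDD approach usable for volume-specific prescribing targets.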
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deasy, J.
The ultimate goal of radiotherapy treatment planning is to find a treatment that will yield a high tumor control probability (TCP) with an acceptable normal tissue complication probability (NTCP). Yet most treatment planning today is not based upon optimization of TCPs and NTCPs, but rather upon meeting physical dose and volume constraints defined by the planner. It has been suggested that treatment planning evaluation and optimization would be more effective if they were biologically and not dose/volume based, and this is the claim debated in this month's Point/Counterpoint. After a brief overview of biologically and DVH based treatment planning by the Moderator Colin Orton, Joseph Deasy (for biological planning) and Charles Mayo (against biological planning) will begin the debate. Some of the arguments in support of biological planning include: (1) this will result in more effective dose distributions for many patients; (2) DVH-based measures of plan quality are known to have little predictive value; (3) there is little evidence that either D95 or D98 of the PTV is a good predictor of tumor control; (4) sufficient validated outcome prediction models are now becoming available and should be used to drive planning and optimization. Some of the arguments against biological planning include: (1) several decades of experience with DVH-based planning should not be discarded; (2) we do not know enough about the reliability and errors associated with biological models; (3) the radiotherapy community in general has little direct experience with side by side comparisons of DVH vs biological metrics and outcomes; (4) it is unlikely that a clinician would accept extremely cold regions in a CTV or hot regions in a PTV, despite having acceptable TCP values. Learning Objectives: To understand dose/volume based treatment planning and its potential limitations; To understand biological metrics such as EUD, TCP, and NTCP; To understand biologically based treatment planning and its potential limitations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, C.
Large-amplitude jumps and non-Gaussian dynamics in highly concentrated hard sphere fluids.
Saltzman, Erica J; Schweizer, Kenneth S
2008-05-01
Our microscopic stochastic nonlinear Langevin equation theory of activated dynamics has been employed to study the real-space van Hove function of dense hard sphere fluids and suspensions. At very short times, the van Hove function is a narrow Gaussian. At sufficiently high volume fractions, such that the entropic barrier to relaxation is greater than the thermal energy, its functional form evolves with time to include a rapidly decaying component at small displacements and a long-range exponential tail. The "jump" or decay length scale associated with the tail increases with time (or particle root-mean-square displacement) at fixed volume fraction, and with volume fraction at the mean alpha relaxation time. The jump length at the alpha relaxation time is predicted to be proportional to a measure of the decoupling of self-diffusion and structural relaxation. At long times corresponding to mean displacements of order a particle diameter, the volume fraction dependence of the decay length disappears. A good superposition of the exponential tail feature based on the jump length as a scaling variable is predicted at high volume fractions. Overall, the theoretical results are in good accord with recent simulations and experiments. The basic aspects of the theory are also compared with a classic jump model and a dynamically facilitated continuous time random-walk model. Decoupling of the time scales of different parts of the relaxation process predicted by the theory is qualitatively similar to facilitated dynamics models based on the concept of persistence and exchange times if the elementary event is assumed to be associated with transport on a length scale significantly smaller than the particle size.
Trends in Performance-Based Funding. Data Points: Volume 5, Issue 19
ERIC Educational Resources Information Center
American Association of Community Colleges, 2017
2017-01-01
States' use of postsecondary performance-based funding is intended to encourage colleges to improve student outcomes. The model relies on indicators such as course completion, time to degree, transfer rates, number of credentials awarded and the number of low-income and minority graduates served. Currently, 21 states use performance-based funding…
Long-term hydrological simulation based on the Soil Conservation Service curve number
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Singh, Vijay P.
2004-05-01
Presenting a critical review of daily flow simulation models based on the Soil Conservation Service curve number (SCS-CN), this paper introduces a more versatile model based on the modified SCS-CN method, which specializes into seven cases. The proposed model was applied to the Hemavati watershed (area = 600 km2) in India and was found to yield satisfactory results in both calibration and validation. The model conserved monthly and annual runoff volumes satisfactorily. A sensitivity analysis of the model parameters was performed, including the effect of variation in storm duration. Finally, to investigate the model components, all seven variants of the modified version were tested for their suitability.
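The standard SCS-CN relation that the modified model builds on can be sketched in a few lines (a minimal sketch of the classical method only, using the conventional 0.2 initial-abstraction ratio; the paper's seven-case modified variant is not reproduced here):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) from a storm of p_mm rainfall via the
    classical SCS-CN method; cn is the curve number (0 < cn <= 100)."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = ia_ratio * s             # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                # all rainfall abstracted before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# An 80 mm storm on a watershed with CN = 75 yields roughly 27 mm of runoff
q = scs_cn_runoff(80.0, 75.0)
```

A daily flow simulation then accumulates such per-storm runoff depths over the watershed area, which is how monthly and annual runoff volumes are conserved.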
A Volterra series-based method for extracting target echoes in the seafloor mining environment.
Zhao, Haiming; Ji, Yaqian; Hong, Yujiu; Hao, Qi; Ma, Liyong
2016-09-01
The purpose of this research was to evaluate the applicability of the Volterra adaptive method to predict the target echo of an ultrasonic signal in an underwater seafloor mining environment. There is growing interest in mining of seafloor minerals because they offer an alternative source of rare metals. Mining the minerals causes the seafloor sediments to be stirred up and suspended in sea water. In such an environment, the target signals used for seafloor mapping cannot be detected because of the unavoidable presence of volume reverberation induced by the suspended sediments. The detection of target signals in reverberation is currently performed using a stochastic model (for example, the autoregressive (AR) model) based on the statistical characterisation of reverberation. However, we examined a new method of signal detection in volume reverberation based on the Volterra series, after confirming that the reverberation is a chaotic signal generated by a deterministic process. The advantage of this method over the stochastic model is that attributes of the specific physical process are considered in the signal detection problem. To test the Volterra series based method and its applicability to target signal detection in the volume reverberation environment produced by the seafloor mining process, we simulated the real-life conditions of seafloor mining in a water-filled tank with dimensions of 5 × 3 × 1.8 m. The bottom of the tank was covered with 10 cm of an irregular sand layer, under which a 5 cm irregular layer of cobalt-rich crusts was placed. The bottom was interrogated by an acoustic wave generated as 16 μs pulses at a frequency of 500 kHz. This frequency is demonstrated to ensure a resolution on the order of one centimetre, which is adequate in exploration practice. Echo signals were collected with a data acquisition card (PCI 1714 UL, 12-bit). Detection of the target echo in these signals was performed by both the Volterra series based model and the AR model. The results confirm that the Volterra series based method is more efficient in detecting the signal in reverberation than the conventional AR model (the accuracy is 80% for the PIM-Volterra prediction model versus 40% for the AR model).
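The idea behind the Volterra predictor can be illustrated with a generic least-squares second-order Volterra one-step predictor (an illustrative stand-in, not the PIM-Volterra model of the paper). On a chaotic but deterministic series such as the logistic map, the quadratic Volterra kernels capture the dynamics exactly, which a purely linear AR fit cannot:

```python
import numpy as np

def volterra_features(window, m):
    """Constant + linear + quadratic (second-order Volterra) features
    built from the last m samples (newest first)."""
    lin = np.asarray(window)[::-1][:m]
    quad = [lin[i] * lin[j] for i in range(m) for j in range(i, m)]
    return np.concatenate(([1.0], lin, quad))

def fit_volterra_predictor(series, m):
    """Least-squares fit of the Volterra kernels for one-step prediction."""
    X = np.array([volterra_features(series[i - m:i], m)
                  for i in range(m, len(series))])
    kernels, *_ = np.linalg.lstsq(X, series[m:], rcond=None)
    return kernels

def volterra_predict(series, kernels, m):
    """One-step-ahead predictions for series[m:]."""
    return np.array([volterra_features(series[i - m:i], m) @ kernels
                     for i in range(m, len(series))])

# Chaotic test signal: the logistic map x_{n+1} = 4 x_n (1 - x_n)
xs = [0.2]
for _ in range(200):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
series = np.array(xs)
kernels = fit_volterra_predictor(series, m=1)
preds = volterra_predict(series, kernels, m=1)   # near-exact recovery
```

Because the logistic map is exactly quadratic in the previous sample, the fitted kernels reproduce it to machine precision, illustrating why a deterministic-process model can outperform a stochastic AR model on chaotic reverberation.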
Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structural Information
NASA Astrophysics Data System (ADS)
Hosoi, F.
2014-12-01
Recently, lidar (light detection and ranging) has been used to extract tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image, and 3-D tree models reproduced from the lidar-derived image can be used to estimate tree structural parameters. We have proposed voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD); we refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy and optimally inclined laser beams are adopted for full laser-beam illumination of the whole canopy, including its interior. From the obtained lidar image, the 3-D information is reproduced as voxel attributes in a 3-D voxel array. Based on the voxel attributes, the contact frequency of laser beams on leaves is computed, and the LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopy trees. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, eliminating blind regions and yielding more accurate LAD profiles than could be obtained by using either type of lidar alone. Based on the estimation results, we proposed an index named the laser beam coverage index, Ω, which relates to the lidar's laser-beam settings and a laser-beam attenuation factor. This index can be used to adjust the measurement set-up of lidar systems and to explain the LAD estimation error of different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of voxel tree modeling. In this method, a voxel solid model of the target tree is produced from the lidar image, composed of consecutive voxels that fill the outer surface and the interior of the stem and large branches. From the model, the woody material volume of any part of the target tree can be calculated directly by counting the number of corresponding voxels and multiplying the count by the per-voxel volume.
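The voxel-counting volume estimate described in the last step is straightforward to express (a toy sketch; the array shape, voxel size, and pre-filled solid are illustrative assumptions, not the paper's data):

```python
import numpy as np

def woody_volume(voxel_solid, voxel_edge_m):
    """Material volume (m^3) of a voxelized solid: count the filled voxels
    and multiply by the per-voxel volume."""
    return int(np.count_nonzero(voxel_solid)) * voxel_edge_m ** 3

# Toy example: a straight "stem" of 100 x 10 x 10 filled voxels at 1 cm resolution
stem = np.zeros((100, 20, 20), dtype=bool)
stem[:, 5:15, 5:15] = True            # 10,000 filled voxels
vol = woody_volume(stem, 0.01)        # 10,000 * (0.01 m)^3 = 0.01 m^3
```

Any sub-volume (a single branch, a stem section) is obtained the same way by slicing the array before counting.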
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Ben; Li, Peiwen; Chan, Cholik
2014-12-18
With an auxiliary large-capacity thermal storage using phase change material (PCM), Concentrated Solar Power (CSP) is a promising technology for high-efficiency solar energy utilization. In a thermal storage system, a dual-media thermal storage tank is typically adopted in industry to reduce the use of the heat transfer fluid (HTF), which is usually expensive. While the sensible heat storage system (SHSS) has been well studied, the dual-media latent heat storage system (LHSS) still needs more attention and study. The volume sizing of the thermal storage tank, considering daily cyclic operations, is of particular significance. In this paper, a general volume sizing strategy for LHSS is proposed, based on an enthalpy-based 1D transient model. One example is presented to demonstrate how to apply this strategy to obtain an actual storage tank volume. With this volume, a LHSS can supply heat to a thermal power plant with the HTF at temperatures above a cutoff point during a desired 6 hours of operation. This general volume sizing strategy is believed to be of particular interest for the solar thermal power industry.
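For orientation, a zeroth-order energy-balance version of tank sizing can be written down directly (a rough sketch under assumed material properties and an assumed utilization factor; the paper's strategy instead uses the enthalpy-based 1D transient model to capture thermocline behavior and daily cycling):

```python
def lhss_tank_volume(power_mw_th, hours, rho, cp, latent, t_high, t_cutoff,
                     utilization=0.8):
    """First-order LHSS tank sizing from an energy balance:
    stored energy = rho * V * (cp * dT + latent) * utilization.
    All material values and the utilization factor are illustrative."""
    q_j = power_mw_th * 1e6 * hours * 3600.0          # required thermal energy (J)
    e_per_m3 = rho * (cp * (t_high - t_cutoff) + latent)  # storable energy density
    return q_j / (e_per_m3 * utilization)

# 50 MWth for 6 h with a nitrate-salt-like PCM:
# rho = 1900 kg/m^3, cp = 1500 J/(kg K), latent = 175 kJ/kg, 565 -> 500 C cutoff
v = lhss_tank_volume(50.0, 6.0, 1900.0, 1500.0, 175e3, 565.0, 500.0)  # ~2600 m^3
```

The transient model refines this estimate by rejecting any volume whose outlet temperature drops below the cutoff before the desired 6 hours elapse.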
Topographic Effects on Geologic Mass Movements
NASA Technical Reports Server (NTRS)
Baloga, Stephen M.; Frey, Herbert (Technical Monitor)
2000-01-01
This report describes research directed toward understanding the response of volcanic lahars and lava flows to changes in the topography along the path of the flow. We have used a variety of steady-state and time-dependent models of lahars and lava flows to calculate the changes in flow dynamics due to variable topography. These models are based on first-order partial differential equations for the local conservation of volume. A global volume conservation requirement is also imposed to determine the extent of the flow as a function of time and the advance rate. Simulated DEMs have been used in this report.
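A minimal numerical illustration of such a local volume-conservation law is a one-dimensional upwind update (illustrative only; the report's lahar and lava-flow models add variable topography, source terms, and the global volume constraint that sets flow extent):

```python
import numpy as np

def advance_flow(h, u, dx, dt):
    """One explicit upwind step of the 1-D volume-conservation equation
    dh/dt + d(h*u)/dx = 0 on a periodic domain, for constant u > 0."""
    flux = h * u                               # volume flux per unit width
    return h - dt / dx * (flux - np.roll(flux, 1))

h = np.ones(50)
h[10:20] = 2.0                                 # a pulse of flow material
h0_total = h.sum()                             # total volume before stepping
for _ in range(100):
    h = advance_flow(h, 0.5, 1.0, 0.5)         # CFL = 0.25, stable
```

On a periodic grid the update conserves the discrete total volume exactly, which is the property the report's global conservation requirement generalizes.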
Delgado San Martin, J A; Worthington, P; Yates, J W T
2015-04-01
Subcutaneous tumour xenograft volumes are generally measured using callipers. This method is susceptible to inter- and intra-observer variability and systematic inaccuracies. Non-invasive 3D measurement using ultrasound and magnetic resonance imaging (MRI) has been considered, but requires immobilization of the animal. An infrared-based 3D time-of-flight (3DToF) camera was used to acquire a depth map of tumour-bearing mice, and a semi-automatic algorithm based on parametric surfaces was applied to estimate tumour volume. Four clay mouse models and 18 tumour-bearing mice were assessed using callipers (applying both prolate spheroid and ellipsoid models) and the 3DToF method, and validated using tumour weight. Inter-experimentalist variability could be up to 25% with the calliper method. Experimental results demonstrated good consistency and relatively low error rates for the 3DToF method, in contrast to biased overestimation using callipers. Accuracy is currently limited by camera performance; however, we anticipate that the next generation of 3DToF cameras will be able to support the development of a practical system. Here, we describe an initial proof of concept for a non-invasive, non-immobilized, morphology-independent, economical and potentially more precise tumour volume assessment technique. This affordable technique should maximize the data points per animal, reducing the number of animals required in experiments and reducing their distress.
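The two calliper formulas mentioned above are commonly written as V = (π/6)·L·W² for the prolate spheroid and V = (π/6)·L·W·H for the ellipsoid (standard textbook forms; the paper's exact conventions are an assumption here):

```python
import math

def prolate_spheroid_volume(length, width):
    """Calliper tumour volume, prolate spheroid model: V = (pi/6) * L * W^2."""
    return math.pi / 6.0 * length * width ** 2

def ellipsoid_volume(length, width, height):
    """Calliper tumour volume, three-axis ellipsoid model: V = (pi/6) * L * W * H."""
    return math.pi / 6.0 * length * width * height

# A 12 x 8 x 6 mm tumour: the spheroid model assumes depth equals width,
# so it overestimates relative to the three-axis ellipsoid here.
v_spheroid = prolate_spheroid_volume(12.0, 8.0)   # ~402 mm^3
v_ellipsoid = ellipsoid_volume(12.0, 8.0, 6.0)    # ~302 mm^3
```

The gap between the two estimates for the same animal is one source of the systematic bias a depth-map method aims to remove.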
Isothermal titration calorimetry in nanoliter droplets with subsecond time constants.
Lubbers, Brad; Baudenbacher, Franz
2011-10-15
We reduced the reaction volume in microfabricated suspended-membrane titration calorimeters to nanoliter droplets and improved the sensitivities to below a nanowatt with time constants of around 100 ms. The device performance was characterized using exothermic acid-base neutralizations and a detailed numerical model. The finite element based numerical model allowed us to determine the sensitivities within 1% and the temporal dynamics of the temperature rise in neutralization reactions as a function of droplet size. The model was used to determine the optimum calorimeter design (membrane size and thickness, junction area, and thermopile thickness) and sensitivities for sample volumes of 1 nL for silicon nitride and polymer membranes. We obtained a maximum sensitivity of 153 pW/(Hz)(1/2) for a 1 μm SiN membrane and 79 pW/(Hz)(1/2) for a 1 μm polymer membrane. The time constant of the calorimeter system was determined experimentally using a pulsed laser to increase the temperature of nanoliter sample volumes. For a 2.5 nanoliter sample volume, we experimentally determined a noise equivalent power of 500 pW/(Hz)(1/2) and a 1/e time constant of 110 ms for a modified commercially available infrared sensor with a thin-film thermopile. Furthermore, we demonstrated detection of 1.4 nJ reaction energies from injection of 25 pL of 1 mM HCl into a 2.5 nL droplet of 1 mM NaOH.
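The 1.4 nJ figure can be checked with a one-line energy balance (the ~57.1 kJ/mol strong acid-strong base neutralization enthalpy at 25 °C is a textbook value assumed here, not taken from the paper):

```python
# Back-of-envelope check of the quoted ~1.4 nJ reaction energy:
# moles of HCl injected times the molar enthalpy of neutralization.
volume_l = 25e-12            # 25 pL injection
conc_mol_per_l = 1e-3        # 1 mM HCl
delta_h = 57.1e3             # J/mol released on neutralization (assumed textbook value)
moles = volume_l * conc_mol_per_l     # 2.5e-14 mol
energy_j = moles * delta_h            # ~1.4e-9 J, i.e. ~1.4 nJ
```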
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity, an integral property of a liquid, is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists of a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
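The core of such a script is a fit of the mean-squared displacement (MSD) followed by the Stokes-Einstein relation (a Newtonian-fluid sketch analogous to, but not copied from, the authors' freely available script):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_msd(msd_m2, lags_s, radius_m, temp_k=298.15):
    """Estimate viscosity from a 2-D mean-squared-displacement curve.

    Fits MSD = 4*D*tau through the origin by least squares, then applies
    the Stokes-Einstein relation eta = kB*T / (6*pi*D*r)."""
    d = np.sum(msd_m2 * lags_s) / np.sum(lags_s ** 2) / 4.0  # diffusion coeff.
    return kB * temp_k / (6.0 * np.pi * d * radius_m)

# Synthetic noise-free check: water-like viscosity (1.0 mPa*s), 0.5 um beads
eta_true = 1.0e-3
r = 0.5e-6
d_true = kB * 298.15 / (6.0 * np.pi * eta_true * r)
lags = np.linspace(0.1, 2.0, 20)
eta = viscosity_from_msd(4.0 * d_true * lags, lags, r)   # recovers ~1.0 mPa*s
```

Real trajectory data would first be converted to an MSD curve over lag times; the factor 4 assumes 2-D tracking (it is 6 for full 3-D tracks).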
A Volume-Fraction Based Two-Phase Constitutive Model for Blood
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Rui; Massoudi, Mehrdad; Hund, S.J.
2008-06-01
Mechanically-induced blood trauma such as hemolysis and thrombosis often occurs at microscopic channels, steps and crevices within cardiovascular devices. A predictive mathematical model based on a broad understanding of hemodynamics at micro scale is needed to mitigate these effects, and is the motivation of this research project. Platelet transport and surface deposition is important in thrombosis. Microfluidic experiments have previously revealed a significant impact of red blood cell (RBC)-plasma phase separation on platelet transport [5], whereby platelet localized concentration can be enhanced due to a non-uniform distribution of RBCs of blood flow in a capillary tube and sudden expansion. However, current platelet deposition models either totally ignored RBCs in the fluid by assuming a zero sample hematocrit or treated them as being evenly distributed. As a result, those models often underestimated platelet advection and deposition to certain areas [2]. The current study aims to develop a two-phase blood constitutive model that can predict phase separation in a RBC-plasma mixture at the micro scale. The model is based on a sophisticated theory known as theory of interacting continua, i.e., mixture theory. The volume fraction is treated as a field variable in this model, which allows the prediction of concentration as well as velocity profiles of both RBC and plasma phases. The results will be used as the input of successive platelet deposition models.
NASA Astrophysics Data System (ADS)
Mukuhira, Yusuke; Asanuma, Hiroshi; Ito, Takatoshi; Häring, Markus
2016-04-01
The occurrence of large-magnitude induced seismicity is a critical environmental issue associated with fluid injection for shale gas/oil extraction, waste water disposal, carbon capture and storage, and engineered geothermal systems (EGS). Studies on predicting hazardous seismicity and assessing the risk of induced seismicity have recently intensified. Many of these studies are based on seismological statistics and use information on event occurrence time and magnitude. We have developed a physics-based model, named the "possible seismic moment model", to evaluate seismic activity and assess the seismic moment that is ready to be released. The model is based entirely on microseismic information: occurrence time, hypocenter location, and magnitude (seismic moment). It assumes the existence of a representative, physically meaningful parameter for a given field: the releasable seismic moment per unit rock volume (seismic moment density), which is estimated from the microseismic distribution and the events' seismic moments. In addition, the stimulated rock volume is inferred from the progress of the microseismic cloud at a given time; this quantity can be interpreted as the rock volume that can release seismic energy owing to the weakening of normal stress by the injected fluid. The product of these two parameters (equation (1)) gives, as the model output, the possible seismic moment that can be released from the currently stimulated zone. The difference between the model output and the observed cumulative seismic moment corresponds to the seismic moment that will be released in the future under current stimulation conditions, and it can be translated into the possible maximum magnitude of future induced seismicity. In this way, the possible seismic moment can be fed back to the hydraulic stimulation operation in real time as an index that is easy to interpret intuitively.
The possible seismic moment is defined by equation (1), where D is the seismic moment density (Mo/m³) and V_stim is the stimulated rock volume (m³):

    Mo_possible = D × V_stim    (1)

We applied this conceptual model to a real microseismic data set from the Basel EGS project, where several large-magnitude induced events occurred and caused structural damage. Using hypocenter locations determined by researchers at Tohoku University, Japan, and moment magnitudes estimated by Geothermal Explorers Ltd., the operating company, we were able to estimate a reasonable seismic moment density, meaning that a single representative parameter exists and can characterize the seismic activity at Basel at each time step. With the stimulated rock volume, also inferred from the microseismic information, we estimated the possible seismic moment and assessed its difference from the observed value. The possible seismic moment increased significantly after shut-in, when the seismic cloud (stimulated zone) progressed most, so the difference from the observed cumulative seismic moment grew accordingly. This suggested that a moderate seismic moment remained to be released in the near future; within the next few hours, the largest event actually occurred. Our proposed model was therefore able to forecast the occurrence of the large events. Furthermore, the best forecast of the maximum magnitude was at the Mw 3 level, and the largest event was Mw 3.41, showing reasonable quantitative forecasting performance. Our attempt to assess seismic activity from microseismic information was successful, and it also suggested that moment release correlates with the expansion of the seismic cloud, as the definition of the possible seismic moment model indicates. This relationship has been observed in microseismic observational studies, and several previous studies have also suggested a correlation with the stress-release rock volume.
Our model is consistent with these studies and provides a practical, physically meaningful method for assessing seismic activity in real time based on microseismic data.
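Equation (1) and the magnitude conversion it feeds can be sketched directly (the moment-density estimator below and the standard Mw-Mo conversion are simplified assumptions, not the authors' exact procedure):

```python
import numpy as np

def possible_seismic_moment(moments_Nm, stim_volume_m3, cloud_volume_m3):
    """Mo_possible = D * V_stim (equation (1) above), with the seismic moment
    density D estimated crudely as cumulative observed moment per unit of the
    cloud volume that produced it."""
    d = np.sum(moments_Nm) / cloud_volume_m3   # seismic moment density, N*m / m^3
    return d * stim_volume_m3

def moment_magnitude(mo_Nm):
    """Moment magnitude from seismic moment (N*m): Mw = (2/3)*(log10 Mo - 9.1)."""
    return 2.0 / 3.0 * (np.log10(mo_Nm) - 9.1)

# Toy numbers: 2e13 N*m released so far within a 1e8 m^3 cloud,
# stimulated volume now 2e8 m^3
mo_possible = possible_seismic_moment([1e13, 1e13], 2e8, 1e8)   # 4e13 N*m
mw_possible = moment_magnitude(mo_possible)
```

The gap between mo_possible and the observed cumulative moment is the quantity the authors track as the moment still available for release.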
Rodeberg, David A.; Stoner, Julie A.; Garcia-Henriquez, Norbert; Randall, R. Lor; Spunt, Sheri L.; Arndt, Carola A.; Kao, Simon; Paidas, Charles N.; Million, Lynn; Hawkins, Douglas S.
2010-01-01
Background To compare tumor volume and patient weight vs. traditional factors of tumor diameter and patient age, to determine which parameters best discriminates outcome among intermediate risk RMS patients. Methods Complete patient information for non-metastatic RMS patients enrolled in the Children’s Oncology Group (COG) intermediate risk study D9803 (1999–2005) was available for 370 patients. The Kaplan-Meier method was used to estimate survival distributions. A recursive partitioning model was used to identify prognostic factors associated with event-free survival (EFS). Cox-proportional hazards regression models were used to estimate the association between patient characteristics and the risk of failure or death. Results For all intermediate risk patients with RMS, a recursive partitioning algorithm for EFS suggests that prognostic groups should optimally be defined by tumor volume (transition point 20 cm3), weight (transition point 50 kg), and embryonal histology. Tumor volume and patient weight added significant outcome information to the standard prognostic factors including tumor diameter and age (p=0.02). The ability to resect the tumor completely was not significantly associated with the size of the patient, and patient weight did not significantly modify the association between tumor volume and EFS after adjustment for standard risk factors (p=0.2). Conclusion The factors most strongly associated with EFS were tumor volume, patient weight, and histology. Based on regression modeling, volume and weight are superior predictors of outcome compared to tumor diameter and patient age in children with intermediate risk RMS. Prognostic performance of tumor volume and patient weight should be assessed in an independent prospective study. PMID:24048802
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
Transportation Energy Conservation Data Book: A Selected Bibliography. Edition 3,
1978-11-01
Mapping debris-flow hazard in Honolulu using a DEM
Ellen, Stephen D.; Mark, Robert K.; ,
1993-01-01
A method for mapping hazard posed by debris flows has been developed and applied to an area near Honolulu, Hawaii. The method uses studies of past debris flows to characterize sites of initiation, volume at initiation, and volume-change behavior during flow. Digital simulations of debris flows based on these characteristics are then routed through a digital elevation model (DEM) to estimate degree of hazard over the area.
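The routing step can be illustrated with a minimal steepest-descent walk over a DEM (a toy sketch; the Honolulu method additionally tracks initiation volume and volume-change behavior along the path):

```python
import numpy as np

def route_steepest_descent(dem, start):
    """Trace a flow path down a DEM by repeatedly stepping to the lowest of
    the 8 neighbouring cells, stopping at a local minimum."""
    path = [start]
    r, c = start
    while True:
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < dem.shape[0]
                      and 0 <= c + dc < dem.shape[1]]
        nr, nc = min(neighbours, key=lambda rc: dem[rc])
        if dem[nr, nc] >= dem[r, c]:   # local minimum: flow stops here
            return path
        r, c = nr, nc
        path.append((r, c))

# Tilted-plane DEM: flow released at the top-left corner runs diagonally
# down to the bottom-right corner.
dem = np.add.outer(np.arange(5, 0, -1), np.arange(5, 0, -1)).astype(float)
path = route_steepest_descent(dem, (0, 0))
```

A hazard map then results from running many such simulated flows, each carrying a volume drawn from the observed initiation and volume-change statistics.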
Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea
2016-11-01
Large volume content dissemination is pursued by a growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have been proven efficient in balancing system performance against control overhead. However, to the authors' best knowledge, routing design for large volume content has not been well considered in previous work, and it introduces new challenges, e.g., an enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes its forwarding decision based on the message header information and its current state, including speed and position. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay. The analytical model is shown to exhibit a good match on the delay estimation with Monte Carlo simulations as well.
Fuzzy Regression Prediction and Application Based on Multi-Dimensional Factors of Freight Volume
NASA Astrophysics Data System (ADS)
Xiao, Mengting; Li, Cheng
2018-01-01
Based on the actual development of air cargo, a multi-dimensional fuzzy regression method is used to identify the influencing factors; the three most important are GDP, total fixed-asset investment, and scheduled flight route mileage. Taking a systems viewpoint and using analogy methods, fuzzy numbers are combined with multiple regression to predict civil aviation cargo volume. Comparison with the 13th Five-Year Plan for China's Civil Aviation Development (2016-2020) shows that the method can effectively improve forecasting accuracy and reduce forecasting risk, and that the model is a feasible and practical way to predict civil aviation freight volume.
Kinetic Models for Adiabatic Reversible Expansion of a Monatomic Ideal Gas.
ERIC Educational Resources Information Center
Chang, On-Kok
1983-01-01
A fixed amount of an ideal gas is confined in an adiabatic cylinder and piston device. The relation between temperature and volume in initial/final phases can be derived from the first law of thermodynamics. However, the relation can also be derived based on kinetic models. Several of these models are discussed. (JN)
Thermal modeling of carbon-epoxy laminates in fire environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGurn, Matthew T.; DesJardin, Paul Edward; Dodd, Amanda B.
2010-10-01
A thermal model is developed for the response of carbon-epoxy composite laminates in fire environments. The model is based on a porous media description that includes the effects of gas transport within the laminate along with swelling. Model comparisons are conducted against the data from Quintiere et al. Simulations are conducted for both coupon-level and intermediate-scale one-sided heating tests. Comparisons of the heat release rate (HRR) as well as the final products (mass fractions, volume percentages, porosity, etc.) are conducted. Overall, the agreement between the available data and the model is excellent considering the simplified approximations used to account for flame heat flux. A sensitivity study using a newly developed swelling model shows the importance of accounting for laminate expansion in the prediction of burnout. Excellent agreement is observed between the model and data on the final product composition, including porosity, mass fractions, and the volume expansion ratio.
Lee, Jung-Min; Levy, Doron
2016-01-01
High-grade serous ovarian cancer (HGSOC) represents the majority of ovarian cancers and accounts for the largest proportion of deaths from the disease. Timely detection of low-volume HGSOC should be the goal of any screening study. However, numerous transvaginal ultrasound (TVU) detection-based population studies aimed at detecting low-volume disease have not yielded reduced mortality rates. A quantitative invalidation of TVU as an effective HGSOC screening strategy is a necessary next step. Herein, we propose a mathematical model for a quantitative explanation of the reported failure of TVU-based screening to improve HGSOC low-volume detectability and overall survival. We develop a novel in silico mathematical assessment of the efficacy of a unimodal TVU monitoring regimen as a strategy aimed at detecting low-volume HGSOC in cancer-positive cases, defined as cases for which the inception of the first malignant cell has already occurred. Our findings show that the median window-of-opportunity interval for TVU monitoring and HGSOC detection is approximately 1.76 years. This does not translate into reduced mortality levels or improved detection accuracy in an in silico cohort across multiple TVU monitoring frequencies or detection sensitivities. We demonstrate that even a semiannual, unimodal TVU monitoring protocol is expected to miss detectable HGSOC. Lastly, we find that circa 50% of the simulated HGSOC growth curves never reach the baseline detectability threshold, and that on average, 5–7 infrequent, rate-limiting stochastic changes in the growth parameters are associated with reaching the HGSOC detectability and mortality thresholds, respectively. Focusing on a malignancy poorly studied in the mathematical oncology community, our model captures the dynamic, temporal evolution of HGSOC progression.
Our mathematical model is consistent with recent case reports and prospective TVU screening population studies, and provides support to the empirical recommendation against frequent HGSOC screening. PMID:27257824
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houweling, Antonetta C., E-mail: A.Houweling@umcutrecht.n; Philippens, Marielle E.P.; Dijkema, Tim
2010-03-15
Purpose: The dose-response relationship of the parotid gland has been described most frequently using the Lyman-Kutcher-Burman model. However, various other normal tissue complication probability (NTCP) models exist. We evaluated in a large group of patients the value of six NTCP models that describe the parotid gland dose response 1 year after radiotherapy. Methods and Materials: A total of 347 patients with head-and-neck tumors were included in this prospective parotid gland dose-response study. The patients were treated with either conventional radiotherapy or intensity-modulated radiotherapy. Dose-volume histograms for the parotid glands were derived from three-dimensional dose calculations using computed tomography scans. Stimulated salivary flow rates were measured before and 1 year after radiotherapy. A threshold of 25% of the pretreatment flow rate was used to define a complication. The evaluated models included the Lyman-Kutcher-Burman model, the mean dose model, the relative seriality model, the critical volume model, the parallel functional subunit model, and the dose-threshold model. The goodness of fit (GOF) was determined by the deviance and a Monte Carlo hypothesis test. Ranking of the models was based on Akaike's information criterion (AIC). Results: None of the models was rejected based on the evaluation of the GOF. The mean dose model was ranked as the best model based on the AIC. The TD50 in these models was approximately 39 Gy. Conclusions: The mean dose model was preferred for describing the dose-response relationship of the parotid gland.
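The mean dose model and the AIC ranking described above can be sketched as follows. The TD50 of 39 Gy is taken from the abstract, while the logistic slope `k` and the function names are assumptions made purely for illustration.

```python
import math

# Hedged sketch: a logistic mean-dose NTCP curve and AIC-based model ranking.
# TD50 = 39 Gy is reported in the abstract; the slope parameter k is assumed.
def ntcp_mean_dose(mean_dose_gy, td50=39.0, k=5.0):
    """Complication probability as a logistic function of mean parotid dose."""
    return 1.0 / (1.0 + math.exp(-(mean_dose_gy - td50) / k))

def aic(log_likelihood, n_params):
    """Akaike's information criterion: 2k - 2 ln L. Lower values rank better."""
    return 2 * n_params - 2 * log_likelihood

# By construction the NTCP is 0.5 at the TD50.
print(ntcp_mean_dose(39.0))
```

Ranking the six candidate models then amounts to fitting each, evaluating its log-likelihood on the cohort, and sorting by `aic`.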
Mesoscale Modeling of LX-17 Under Isentropic Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springer, H K; Willey, T M; Friedman, G
Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ² + 2.0473μ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ − 0.6967μ² + 4.9546μ³) and the KEA model yielded the most compliant EOS (0.1999μ − 0.6967μ² + 4.9546μ³) of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.
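The fitted UEOS polynomials quoted above can be evaluated directly. In this sketch the coefficients are those reported in the abstract, while the interpretation of μ as compression (μ = ρ/ρ₀ − 1), the units, and the comparison point μ = 0.2 are assumptions for illustration.

```python
# Sketch: evaluate a cubic UEOS of the form P(mu) = A*mu + B*mu^2 + C*mu^3,
# where mu is the compression. Coefficients as quoted in the abstract.
def eos_pressure(mu, coeffs):
    """Pressure from a cubic compression polynomial."""
    a, b, c = coeffs
    return a * mu + b * mu**2 + c * mu**3

SCEOS = (0.121, 0.4958, 2.0473)    # stiffest fit
SCCE = (0.1999, -0.6967, 4.9546)   # more compliant fit

mu = 0.2  # hypothetical 20% compression
# The "stiffer" model gives the higher pressure at the same compression.
print(eos_pressure(mu, SCEOS) > eos_pressure(mu, SCCE))
```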
Yin, Xiaoming; Guo, Yang; Li, Weiguo; Huo, Eugene; Zhang, Zhuoli; Nicolai, Jodi; Kleps, Robert A.; Hernando, Diego; Katsaggelos, Aggelos K.; Omary, Reed A.
2012-01-01
Purpose: To demonstrate the feasibility of using chemical shift magnetic resonance (MR) imaging fat-water separation methods for quantitative estimation of transcatheter lipiodol delivery to liver tissues. Materials and Methods: Studies were performed in accordance with institutional Animal Care and Use Committee guidelines. Proton nuclear MR spectroscopy was first performed to identify lipiodol spectral peaks and relative amplitudes. Next, phantoms were constructed with increasing lipiodol-water volume fractions. A multiecho chemical shift–based fat-water separation method was used to quantify lipiodol concentration within each phantom. Six rats served as controls; 18 rats underwent catheterization with digital subtraction angiography guidance for intraportal infusion of a 15%, 30%, or 50% by volume lipiodol-saline mixture. MR imaging measurements were used to quantify lipiodol delivery to each rat liver. Lipiodol concentration maps were reconstructed by using both single-peak and multipeak chemical shift models. Intraclass and Spearman correlation coefficients were calculated for statistical comparison of MR imaging–based lipiodol concentration and volume measurements to reference standards (known lipiodol phantom compositions and the infused lipiodol dose during rat studies). Results: Both single-peak and multipeak measurements were well correlated to phantom lipiodol concentrations (r2 > 0.99). Lipiodol volume measurements were progressively and significantly higher when comparing between animals receiving different doses (P < .05 for each comparison). MR imaging–based lipiodol volume measurements strongly correlated with infused dose (intraclass correlation coefficients > 0.93, P < .001) with both single- and multipeak approaches. Conclusion: Chemical shift MR imaging fat-water separation methods can be used for quantitative measurements of lipiodol delivery to liver tissues. © RSNA, 2012 PMID:22623693
Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie
2018-05-01
In this paper, we present an approach for left atrial appendage (LAA) multi-phase fast segmentation and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension information to segment the living, flailed LAA based on a parametric max-flow method and a graph-cut approach to build a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models and then generate a “volume-phase” curve to calculate the important dynamic metrics: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach demonstrates more precise results than conventional approaches that calculate metrics by area, and allows for quick analysis of LAA-volume pattern changes in a cardiac cycle. It may also provide insight into individual differences in the lesions of the LAA. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of AF by exploiting seven features from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were taken from the Philips 256-iCT. The experimental results demonstrate that our approach can construct the 3-D LAA geometries robustly compared to manual annotations, and reasonably infer that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research provides a potential means for exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF. Copyright © 2018 Elsevier Ltd. All rights reserved.
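The dynamic metrics derived from the “volume-phase” curve can be sketched as below. The per-phase volumes are synthetic, and the flux definitions are simplified assumptions for illustration, not the paper's exact formulas.

```python
# Hypothetical per-phase LAA volumes (mL) segmented over one cardiac cycle.
volumes = [8.2, 9.6, 10.4, 9.1, 7.3, 6.0, 6.8, 7.9]

v_max, v_min = max(volumes), min(volumes)

# Volume-based ejection fraction: fraction of the peak volume expelled.
ejection_fraction = (v_max - v_min) / v_max
# Simplified flux definitions (assumed): rise to peak and peak-to-trough drop.
filling_flux = v_max - volumes[0]
emptying_flux = v_max - v_min

print(round(ejection_fraction, 3))
```

Feeding such per-case metrics (volume change ratios across phases) into an SVM is then a standard supervised-classification step.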
Ganguly, R; Choudhury, N
2012-04-15
AOT-based water-in-oil (w/o) microemulsions are one of the most extensively studied reverse micellar systems because of their rich phase behavior and their ability to form in the absence of any co-surfactant. The aggregation characteristics and interaction of the microemulsion droplets in these systems are known to be governed by AOT-oil compatibility and the water-to-AOT molar ratio (w). In this manuscript, using dynamic light scattering (DLS) and viscometry techniques, we show that the droplet volume fraction also plays an important role in shaping the phase behavior of these microemulsions in dodecane. The phase separation characteristics and the evolution of the viscosity and the hydrodynamic radius of the microemulsion droplets on approaching the cloud points have thus been found to undergo a complete transformation as one goes from low to high droplet volume fraction, even at a fixed w. Modeling of the DLS data attributes this to the weakening of the inter-droplet attractive interaction caused by the growing dominance of the excluded volume effect with increasing droplet volume fraction. In the literature, the inter-droplet attractive interaction driven phase separation in these microemulsions is explained in terms of a gas-liquid type phase transition, conceptualized in the framework of the Baxter adhesive hard sphere theory. The modeling of our viscosity data, however, does not support such a proposition, as the characteristic stickiness parameter (τ⁻¹) of the microemulsion droplets in this system remains much lower than the critical value (τc⁻¹ ≈ 10.25) required to enforce such a phase transition. Copyright © 2012 Elsevier Inc. All rights reserved.
Partitioning standard base excess: a new approach.
Morgan, Thomas John
2011-12-01
'Standard' or 'extracellular' base excess (SBE) is a modified calculation using one-third the normal hemoglobin concentration. It is a 'CO(2)-invariant' expression of metabolic acid-base status integrated across interstitial, plasma and erythrocytic compartments (IPE). SBE also integrates conflicting physical chemical influences on metabolic acid-base status. Until recently, attempts to quantify individual contributions to SBE, for example the plasma strong ion gap, failed to span the 'CO(2)-stable' IPE dimension. The first breakthrough was from Anstey, who determined the concentration of unmeasured charged species referenced to the IPE domain using Wooten's physical chemical version of the Van Slyke equation. In this issue Drs Wolf and DeLand present a diagnostic tool based on an IPE model which dissects a version of SBE (BEnet) into nine independent (BEind) components, all referenced to the IPE domain. The reported components are excesses/deficits of free water, chloride, albumin, unmeasured ions, sodium, potassium, lactate, 'Ca-Mg' (a composite divalent cation entity), and phosphate. The model also reports individualised volumes of plasma, erythrocytes and interstitial fluid. The tool is an original contribution, but there are concerns. The impact of assuming fixed relationships between arterial and venous acid-base and saturation values in sepsis, anaemia and in differing shock states is unclear. Clinicians are also unlikely to accept that unique, accurate IPE volume determinations can be derived from a single set of blood gas and biochemistry results. Nevertheless, volume determinations aside, the tool is likely to become a valuable addition to the diagnostic armamentarium.
Assessment of growth dynamics of human cranium middle fossa in foetal period.
Skomra, Andrzej; Kędzia, Alicja; Dudek, Krzysztof; Bogacz, Wiesław
2014-01-01
Analysis of the available literature revealed a paucity of studies of the cranial base. The goal of this study was to analyse the middle fossa of the human cranium in the foetal period against the other fossae. The survey material consisted of 110 human foetuses at a morphological age of 16-28 weeks of foetal life, CRL 98-220 mm. Anthropological and preparation methods, the reverse method, and statistical analysis were utilized. The survey incorporated the following computer programmes: Renishaw, TraceSurf, AutoCAD, CATIA. The reverse method seems especially interesting: an impression with polysiloxane (a silicone elastomer of high adhesive power used in dentistry) with 18 D 4823 activator. The elicited impression accurately reflected the complex shape of the cranium base. On assessing the relative growth rate of the cranium middle fossa, the rate was found to be stable (linear model) for the whole of the analysed period, at 0.19%/week, which indicates gradual and steady growth of the middle fossa in relation to the whole of the cranium base. At the same time, from the 16th till the 28th week of foetal life, the relative volume of the cranium middle fossa increases more intensively than that of the cranium anterior fossa, whereas its growth compared with the cranium posterior fossa is definitely slower. In the analysed period, the growth rate of the cranium base middle fossa was bigger in the 4th and 5th months than in the 6th and 7th months of foetal life. The investigations revealed cranium base asymmetry favouring the left side: the anterior fossa volume on the left side is significantly bigger than that on the right side. The volume growth rate is more intensive in the 4th and 5th than in the 6th and 7th months of foetal life. In the examined period, the relative growth rate of the cranium base middle fossa is 0.19%/week and is stable (linear model). The study revealed correlations in the form of mathematical models, which enabled foetal age assessment.
NASA Astrophysics Data System (ADS)
Zhang, Wei; Bi, Zhengzheng; Shen, Dehua
2017-02-01
This paper investigates the impact of investor structure on the price-volume relationship by simulating a continuous double auction market. Connected with the underlying mechanisms of the price-volume relationship, i.e., the Mixture of Distribution Hypothesis (MDH) and the Sequential Information Arrival Hypothesis (SIAH), the simulation results show that: (1) there exists a strong lead-lag relationship between the return volatility and trading volume when the number of informed investors is close to the number of uninformed investors in the market; (2) as more and more informed investors enter the market, the lead-lag relationship becomes weaker, while the contemporaneous relationship between the return volatility and trading volume becomes more prominent; (3) when the informed investors are in the absolute majority, the market achieves the new equilibrium immediately. Therefore, we conclude that investor structure is a key factor affecting the price-volume relationship.
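The lead-lag relationship described in finding (1) is commonly quantified with lagged cross-correlations between volatility and volume. A hedged sketch on synthetic series follows; the construction (volatility lagging volume by two periods) is purely illustrative and not from the paper.

```python
import numpy as np

# Synthetic series: volume, and a volatility proxy built to lag volume by
# 2 periods plus noise (an illustrative construction, not market data).
rng = np.random.default_rng(0)
n = 500
volume = rng.gamma(2.0, 1.0, n)
volatility = 0.8 * np.roll(volume, 2) + 0.2 * rng.gamma(2.0, 1.0, n)

def cross_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag]; positive lag means x leads y."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

# The lag with the peak correlation identifies the lead-lag structure.
best = max(range(-5, 6), key=lambda L: cross_corr(volume, volatility, L))
print(best)
```

A contemporaneous (MDH-like) relationship would instead show the peak at lag 0.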
Vieira, Joana B; Ferreira-Santos, Fernando; Almeida, Pedro R; Barbosa, Fernando; Marques-Teixeira, João; Marsh, Abigail A
2015-12-01
Research suggests psychopathy is associated with structural brain alterations that may contribute to the affective and interpersonal deficits frequently observed in individuals with high psychopathic traits. However, the regional alterations related to different components of psychopathy are still unclear. We used voxel-based morphometry to characterize the structural correlates of psychopathy in a sample of 35 healthy adults assessed with the Triarchic Psychopathy Measure (TriPM). Furthermore, we examined the regional grey matter alterations associated with the components described by the triarchic model. Our results showed that, after accounting for variation in total intracranial volume, age and IQ, overall psychopathy was negatively associated with grey matter volume in the left putamen and amygdala. Additional regression analysis with anatomical regions of interest revealed that total TriPM score was also associated with increased lateral orbitofrontal cortex (OFC) and caudate volume. Boldness was positively associated with volume in the right insula. Meanness was positively associated with lateral OFC and striatum volume, and negatively associated with amygdala volume. Finally, disinhibition was negatively associated with amygdala volume. Results highlight the contribution of both subcortical and cortical brain alterations to subclinical psychopathy and are discussed in light of prior research and theoretical accounts of the neurobiological bases of psychopathic traits. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Probing the Dusty Stellar Populations of the Local Volume Galaxies with JWST/MIRI
NASA Astrophysics Data System (ADS)
Jones, Olivia C.; Meixner, Margaret; Justtanont, Kay; Glasse, Alistair
2017-05-01
The Mid-Infrared Instrument (MIRI) for the James Webb Space Telescope (JWST) will revolutionize our understanding of infrared stellar populations in the Local Volume. Using the rich Spitzer-IRS spectroscopic data set and spectral classifications from the Surveying the Agents of Galaxy Evolution (SAGE)-Spectroscopic survey of more than 1000 objects in the Magellanic Clouds, the Grid of Red Supergiant and Asymptotic Giant Branch Star Models (GRAMS), and the grid of YSO models by Robitaille et al., we calculate the expected flux densities and colors in the MIRI broadband filters for prominent infrared stellar populations. We use these fluxes to explore the JWST/MIRI colors and magnitudes for composite stellar population studies of Local Volume galaxies. MIRI color classification schemes are presented; these diagrams provide a powerful means of identifying young stellar objects, evolved stars, and extragalactic background galaxies in Local Volume galaxies with a high degree of confidence. Finally, we examine which filter combinations are best for selecting populations of sources based on their JWST colors.
Morphological phenotyping of mouse hearts using optical coherence tomography
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lin, Eric; Lee, Ling; Sheng, Xiaoye; Wong, Kevin S. K.; Tibbits, Glen F.; Beg, Mirza Faisal; Sarunic, Marinko V.
2014-11-01
Transgenic mouse models have been instrumental in the elucidation of the molecular mechanisms behind many genetically based cardiovascular diseases such as Marfan syndrome (MFS). However, the characterization of their cardiac morphology has been hampered by the small size of the mouse heart. In this report, we adapted optical coherence tomography (OCT) for imaging fixed adult mouse hearts, and applied tools from computational anatomy to perform morphometric analyses. The hearts were first optically cleared and imaged from multiple perspectives. The acquired volumes were then corrected for refractive distortions, and registered and stitched together to form a single, high-resolution OCT volume of the whole heart. From this volume, various structures such as the valves and myofibril bundles were visualized. The volumetric nature of our dataset also allowed parameters such as wall thickness, ventricular wall masses, and luminal volumes to be extracted. Finally, we applied the entire acquisition and processing pipeline in a preliminary study comparing the cardiac morphology of wild-type mice and a transgenic mouse model of MFS.
Acoustic measurement of bubble size and position in a piezo driven inkjet printhead
NASA Astrophysics Data System (ADS)
van der Bos, Arjan; Jeurissen, Roger; de Jong, Jos; Stevens, Richard; Versluis, Michel; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; Lohse, Detlef
2008-11-01
A bubble can be entrained in the ink channel of a piezo-driven inkjet printhead, where it grows by rectified diffusion. If large enough, the bubble counteracts the pressure buildup at the nozzle, resulting in nozzle failure. Here an acoustic sizing method for the volume and position of the bubble is presented. The bubble response is detected by the piezo actuator itself, operating in a sensor mode. The method used to determine the volume and position of the bubble is based on a linear model in which the interaction between the bubble and the channel is included. This model predicts the acoustic signal for a given position and volume of the bubble. The inverse problem is to infer the position and volume of the bubble from the measured acoustic signal. By solving it, we can thus acoustically measure the size and position of the bubble. The validity of the presented method is supported by time-resolved optical observations of the dynamics of the bubble within an optically accessible ink-jet channel.
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, Gretchen G.; Edwards, Thomas C.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
A prospective cohort study on radiation-induced hypothyroidism: development of an NTCP model.
Boomsma, Marjolein J; Bijl, Hendrik P; Christianen, Miranda E M C; Beetz, Ivo; Chouvalova, Olga; Steenbakkers, Roel J H M; van der Laan, Bernard F A M; Wolffenbuttel, Bruce H R; Oosting, Sjoukje F; Schilstra, Cornelis; Langendijk, Johannes A
2012-11-01
To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism. The thyroid-stimulating hormone (TSH) level of 105 patients treated with (chemo)radiation therapy for head-and-neck cancer was prospectively measured during a median follow-up of 2.5 years. Hypothyroidism was defined as elevated serum TSH with decreased or normal free thyroxine (T4). A multivariate logistic regression model with bootstrapping was used to determine the most important prognostic variables for radiation-induced hypothyroidism. Thirty-five patients (33%) developed primary hypothyroidism within 2 years after radiation therapy. An NTCP model based on 2 variables, the mean thyroid gland dose and the thyroid gland volume, was most predictive for radiation-induced hypothyroidism. NTCP values increased with higher mean thyroid gland dose (odds ratio [OR]: 1.064/Gy) and decreased with higher thyroid gland volume (OR: 0.826/cm(3)). Model performance was good, with an area under the curve (AUC) of 0.85. This is the first prospective study resulting in an NTCP model for radiation-induced hypothyroidism. The probability of hypothyroidism rises with increasing dose to the thyroid gland, whereas it falls with increasing thyroid gland volume. Copyright © 2012 Elsevier Inc. All rights reserved.
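The two-variable logistic NTCP model described above can be sketched as follows. The odds ratios are those reported in the abstract; the intercept is not given and is assumed here purely for illustration.

```python
import math

# Sketch of a two-variable logistic NTCP model. Log-odds slopes are derived
# from the reported odds ratios; BETA0 is a hypothetical, assumed intercept.
BETA_DOSE = math.log(1.064)   # per Gy of mean thyroid gland dose (OR 1.064/Gy)
BETA_VOL = math.log(0.826)    # per cm^3 of thyroid gland volume (OR 0.826/cm^3)
BETA0 = -1.0                  # assumed intercept, NOT from the study

def ntcp_hypothyroidism(mean_dose_gy, thyroid_volume_cm3):
    """Probability of radiation-induced hypothyroidism (illustrative)."""
    s = BETA0 + BETA_DOSE * mean_dose_gy + BETA_VOL * thyroid_volume_cm3
    return 1.0 / (1.0 + math.exp(-s))

# Risk rises with dose and falls with gland volume, matching the reported ORs.
print(ntcp_hypothyroidism(40, 10) > ntcp_hypothyroidism(30, 10))
print(ntcp_hypothyroidism(40, 10) > ntcp_hypothyroidism(40, 15))
```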
Microfocal angiography of the pulmonary vasculature
NASA Astrophysics Data System (ADS)
Clough, Anne V.; Haworth, Steven T.; Roerig, David T.; Linehan, John H.; Dawson, Christopher A.
1998-07-01
X-ray microfocal angiography provides a means of assessing regional microvascular perfusion parameters using residue detection of vascular indicators. As an application of this methodology, we studied the effects of alveolar hypoxia, a pulmonary vasoconstrictor, on the pulmonary microcirculation to determine changes in regional blood mean transit time, volume and flow between control and hypoxic conditions. Video x-ray images of a dog lung were acquired as a bolus of radiopaque contrast medium passed through the lobar vasculature. X-ray time-absorbance curves were acquired from arterial and microvascular regions-of-interest during both control and hypoxic alveolar gas conditions. A mathematical model based on indicator-dilution theory for image residue curves was applied to the data to determine changes in microvascular perfusion parameters. The sensitivity of the model parameters to the model assumptions was analyzed. Generally, the model parameter describing regional microvascular volume, corresponding to the area under the microvascular absorbance curve, was the most robust. The results of the model analysis applied to the experimental data suggest a significant decrease in microvascular volume with hypoxia. However, additional model assumptions concerning the flow kinematics within the capillary bed may be required for assessing changes in regional microvascular flow and mean transit time from image residue data.
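The indicator-dilution quantities referenced above can be sketched numerically. The time-absorbance curve below is synthetic, and the proportionality constants linking area to absolute volume are omitted; this only illustrates the area and first-moment calculations.

```python
import numpy as np

# Synthetic time-absorbance curve for a contrast bolus (shape is hypothetical).
t = np.linspace(0.0, 20.0, 401)   # seconds
curve = t * np.exp(-t / 3.0)      # illustrative bolus-shaped curve

def trapezoid(y, x):
    """Trapezoidal-rule integral of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# Area under the curve is proportional to regional blood volume; the
# normalized first moment gives the mean transit time.
area = trapezoid(curve, t)
mtt = trapezoid(t * curve, t) / area
print(round(mtt, 2))
```

Flow then follows from the central volume principle, flow = volume / mean transit time, once the proportionality constants are calibrated.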
Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Lapene, A.; Pauget, L.
2012-12-01
During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflows for reservoir characterization, which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representations, where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on an idealized representation of the fracture distribution, which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly, allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1], which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation, which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general-purpose simulators, and any connectivity-based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. Upscaling techniques are required for large reservoirs, where the explicit representation of all fractures and faults is not possible.
Karimi-Fard et al. [2] have developed an upscaling technique based on the DFM representation. The original version of this technique was developed to construct a dual-porosity model from a discrete fracture description. This technique has been extended and generalized so it can be applied to a wide range of problems, from reservoirs with few or no fractures to highly fractured reservoirs. In this work, we present the application of these techniques to two three-dimensional fractured reservoirs constructed using real data. The first model contains more than 600 medium- and large-scale fractures. The fractures are not always connected, which requires a general modeling technique. The reservoir has 50 wells (injectors and producers) and water flooding simulations are performed. The second test case is a larger reservoir with sparsely distributed faults. Single-phase simulations are performed with 5 producing wells. [1] Karimi-Fard M., Durlofsky L.J., and Aziz K. 2004. An efficient discrete-fracture model applicable for general-purpose reservoir simulators. SPE Journal, 9(2): 227-236. [2] Karimi-Fard M., Gong B., and Durlofsky L.J. 2006. Generation of coarse-scale continuum flow models from detailed fracture characterizations. Water Resources Research, 42(10): W10423.
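The two-point flux approximation underlying this DFM discretization can be sketched as follows; the geometry and permeability values are hypothetical, and the half-transmissibility form is the standard textbook one rather than the optimized variant of reference [1].

```python
# Sketch of a two-point flux approximation between adjacent control-volumes:
# each cell contributes a half-transmissibility toward the shared face, and
# the connection transmissibility is their harmonic combination.
def half_transmissibility(area_m2, perm, dist_m):
    """T_i = A * k_i / d_i for control-volume i toward the shared face."""
    return area_m2 * perm / dist_m

def transmissibility(t1, t2):
    """Harmonic combination of the two half-transmissibilities."""
    return t1 * t2 / (t1 + t2)

t_matrix = half_transmissibility(10.0, 1.0, 0.5)        # tight matrix block
t_fracture = half_transmissibility(10.0, 1000.0, 0.01)  # conductive fracture

# The connection is limited by the tighter side, as expected physically.
print(transmissibility(t_matrix, t_fracture) < t_matrix)
```

The discretization output described above is then simply lists of such connections with their transmissibilities, plus per-cell pore volumes.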
Geodetic imaging: Reservoir monitoring using satellite interferometry
Vasco, D.W.; Wicks, C.; Karasaki, K.; Marques, O.
2002-01-01
Fluid fluxes within subsurface reservoirs give rise to surface displacements, particularly over periods of a year or more. Observations of such deformation provide a powerful tool for mapping fluid migration within the Earth, providing new insights into reservoir dynamics. In this paper we use Interferometric Synthetic Aperture Radar (InSAR) range changes to infer subsurface fluid volume strain at the Coso geothermal field. Furthermore, we conduct a complete model assessment, using an iterative approach to compute model parameter resolution and covariance matrices. The method is a generalization of a Lanczos-based technique which allows us to include fairly general regularization, such as roughness penalties. We find that we can resolve quite detailed lateral variations in volume strain both within the reservoir depth range (0.4-2.5 km) and below the geothermal production zone (2.5-5.0 km). The fractional volume change in all three layers of the model exceeds the estimated model parameter uncertainty by a factor of two or more. In the reservoir depth interval (0.4-2.5 km), the predominant volume change is associated with northerly and westerly oriented faults and their intersections. However, below the geothermal production zone proper (the depth range 2.5-5.0 km), there is the suggestion that both north- and northeast-trending faults may act as conduits for fluid flow.
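The regularized inversion and resolution analysis described above can be illustrated with a small damped least-squares sketch: a roughness (second-difference) penalty is added to the data misfit, and the model resolution matrix follows from the same normal-equations operator. This is a schematic stand-in for the paper's Lanczos-based iterative approach, using hypothetical matrices.

```python
import numpy as np

def roughness_matrix(n):
    # Second-difference operator: penalizes model roughness
    L = np.zeros((n - 2, n))
    for k in range(n - 2):
        L[k, k:k + 3] = [1.0, -2.0, 1.0]
    return L

def regularized_inverse(G, d, lam):
    """Solve m = argmin ||G m - d||^2 + lam^2 ||L m||^2 via the normal
    equations; also return the model resolution matrix
    R = (G^T G + lam^2 L^T L)^{-1} G^T G."""
    n = G.shape[1]
    L = roughness_matrix(n)
    A = G.T @ G + lam**2 * (L.T @ L)
    m = np.linalg.solve(A, G.T @ d)
    R = np.linalg.solve(A, G.T @ G)
    return m, R

# With no regularization and a full-rank square G, the data are fit
# exactly and the resolution matrix is the identity:
G = np.eye(5)
d = np.array([0.0, 1.0, 4.0, 1.0, 0.0])
m0, R0 = regularized_inverse(G, d, lam=0.0)
# Adding the roughness penalty smooths the model and lowers resolution:
m1, R1 = regularized_inverse(G, d, lam=1.0)
```

The trace of R counts the effectively resolved parameters; the Lanczos-based generalization in the paper computes the same quantities iteratively, without forming the normal-equations operator explicitly.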
Woody debris volume depletion through decay: implications for biomass and carbon accounting
Fraver, Shawn; Milo, Amy M.; Bradford, John B.; D'Amato, Anthony W.; Kenefic, Laura; Palik, Brian J.; Woodall, Christopher W.; Brissette, John
2013-01-01
Woody debris decay rates have recently received much attention because of the need to quantify temporal changes in forest carbon stocks. Published decay rates, available for many species, are commonly used to characterize deadwood biomass and carbon depletion. However, decay rates are often derived from reductions in wood density through time, which when used to model biomass and carbon depletion are known to underestimate the rate of loss because they fail to account for volume reduction (changes in log shape) as decay progresses. We present a method for estimating changes in log volume through time and illustrate the method using a chronosequence approach. The method is based on the observation, confirmed herein, that decaying logs have a collapse ratio (cross-sectional height/width) that can serve as a surrogate for the volume remaining. Combining the resulting volume loss with concurrent changes in wood density from the same logs then allowed us to quantify biomass and carbon depletion for three study species. Results show that volume, density, and biomass follow distinct depletion curves during decomposition. Volume showed an initial lag period (log dimensions remained unchanged), even while wood density was being reduced. However, once volume depletion began, biomass loss (the product of density and volume depletion) occurred much more rapidly than density loss alone. At the temporal limit of our data, the proportion of biomass remaining was roughly half that of the density remaining. Accounting for log volume depletion, as demonstrated in this study, provides a comprehensive characterization of deadwood decomposition, thereby improving biomass-loss and carbon-accounting models.
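The core bookkeeping of the method (biomass remaining as the product of density remaining and volume remaining, with the collapse ratio standing in for volume) can be sketched as follows. The assumption that volume per unit length scales directly with the collapse ratio is a simplification for illustration, not the paper's fitted model.

```python
def volume_remaining(collapse_ratio):
    """Hypothetical surrogate: treat the log cross-section as an
    ellipse whose height/width ratio (the collapse ratio) falls from
    1.0 as decay progresses, so cross-sectional area, and hence volume
    per unit length, scales with it. Clamped to [0, 1]."""
    return max(0.0, min(1.0, collapse_ratio))

def biomass_remaining(density_frac, collapse_ratio):
    # Biomass depletion is the product of density and volume depletion
    return density_frac * volume_remaining(collapse_ratio)

# A log at half its initial wood density that has collapsed to 60% of
# its original cross-sectional height retains ~30% of its biomass,
# not the 50% a density-only model would report:
frac = biomass_remaining(0.5, 0.6)  # 0.30
```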
Testing a ground-based canopy model using the wind river canopy crane
Robert Van Pelt; Malcolm P. North
1999-01-01
A ground-based canopy model that estimates the volume of occupied space in forest canopies was tested using the Wind River Canopy Crane. A total of 126 trees in a 0.25 ha area were measured from the ground and directly from a gondola suspended from the crane. The trees were located in a low elevation, old-growth forest in the southern Washington Cascades. The ground-...
Medical Image Retrieval: A Multimodal Approach
Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning
2014-01-01
Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images and bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical image indexing and retrieval system. PMID:26309389
Homogenous Surface Nucleation of Solid Polar Stratospheric Cloud Particles
NASA Technical Reports Server (NTRS)
Tabazadeh, A.; Hamill, P.; Salcedo, D.; Gore, Warren J. (Technical Monitor)
2002-01-01
A general surface nucleation rate theory is presented for the homogeneous freezing of crystalline germs on the surfaces of aqueous particles. While nucleation rates in standard classical homogeneous freezing rate theory scale with volume, the rates in a surface-based theory scale with surface area. The theory is used to convert volume-based information on laboratory freezing rates (in units of cm^-3 s^-1) of nitric acid trihydrate (NAT) and nitric acid dihydrate (NAD) aerosols into surface-based values (in units of cm^-2 s^-1). We show that a surface-based model is capable of reproducing measured nucleation rates of NAT and NAD aerosols from concentrated aqueous HNO3 solutions in the temperature range of 165 to 205 K. Laboratory-measured nucleation rates are used to derive free energies for NAT and NAD germ formation in the stratosphere. NAD germ free energies range from about 23 to 26 kcal/mole, allowing for fast and efficient homogeneous NAD particle production in the stratosphere. However, NAT germ formation energies are large enough (greater than 26 kcal/mole) to prevent efficient NAT particle production in the stratosphere. We show that atmospheric NAD particle production rates based on the surface rate theory are roughly 2 orders of magnitude larger than those obtained from a standard volume-based rate theory. Atmospheric volume and surface production of NAD particles will nearly cease in the stratosphere when denitrification of the air exceeds 40% and 78%, respectively. We show that a surface-based (volume-based) homogeneous freezing rate theory gives particle production rates which are (are not) consistent with both laboratory and atmospheric data on the nucleation of solid polar stratospheric cloud particles.
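The contrast between the two scalings can be made concrete: for a spherical droplet, a volume-based rate multiplies J_v by (4/3)*pi*r^3 while a surface-based rate multiplies J_s by 4*pi*r^2, so the surface term gains a factor 3/r in relative weight as particles shrink. A minimal sketch with illustrative unit rates (not the measured NAT/NAD values):

```python
import math

def nucleation_rate_per_particle(r_cm, J_v=0.0, J_s=0.0):
    """Nucleation events per particle per second for a spherical
    droplet of radius r_cm: the volume term J_v (cm^-3 s^-1) scales
    with V, the surface term J_s (cm^-2 s^-1) with A."""
    V = 4.0 / 3.0 * math.pi * r_cm**3   # droplet volume, cm^3
    A = 4.0 * math.pi * r_cm**2         # droplet surface area, cm^2
    return J_v * V + J_s * A

# For a 1-micron (1e-4 cm) droplet, the surface-to-volume weighting
# ratio is A/V = 3/r = 3e4, so for equal unit rates the surface
# pathway dominates small stratospheric aerosols:
r = 1.0e-4
ratio = (nucleation_rate_per_particle(r, J_s=1.0)
         / nucleation_rate_per_particle(r, J_v=1.0))  # 3.0e4
```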
Characterization of fluid physics effects on cardiovascular response to microgravity (G-572)
NASA Technical Reports Server (NTRS)
Pantalos, George M.; Sharp, M. Keith; Woodruff, Stewart J.; Lorange, Richard D.; Bennett, Thomas E.; Sojka, Jan J.; Lemon, Mark W.
1993-01-01
The recognition and understanding of cardiovascular adaptation to spaceflight has advanced substantially in the last several years. In-flight echocardiographic measurements of astronaut cardiac function on the Space Shuttle have documented a 15 percent reduction in both left ventricular volume index and stroke volume, with a compensatory increase in heart rate to maintain cardiac output. To date, the reduced cardiac size and stroke volume have been presumed to be the consequence of the reduction in circulating fluid volume following diuresis and other physiological processes that reduce blood volume within a few days after orbital insertion. However, no specific mechanism for the reduced stroke volume has been elucidated. The following investigation proposes the use of a hydraulic model of the cardiovascular system to examine the possibility that the observed reduction in stroke volume may, in part, be related to fluid physics effects on heart function. The automated model is being prepared to fly as a GAS payload. The experimental apparatus consists of a pneumatically actuated, elliptical artificial ventricle connected to a closed-loop hydraulic circuit with compliance and resistance elements to create physiologic pressure and flow conditions. The ventricle is instrumented with high-fidelity, acceleration-insensitive, catheter-tip pressure transducers (Millar Instruments) in the apex and base to determine the instantaneous ventricular pressures and the pressure difference across the left ventricle, delta-P_LV (LVP_apex - LVP_base). The ventricle is also instrumented with a flow probe and pressure transducers immediately upstream of the inflow valve and downstream of the outflow valve. The experiment will be microprocessor-controlled, with analog signals stored on the FM data tape recorder. By varying the circulating fluid volume, ventricular function can be determined for varying preload pressures with fixed afterload pressure.
Pilot experiments on board the NASA KC-135 aircraft have demonstrated proof-of-concept and provided early support for the proposed hypothesis. A review of the pilot experiments and developmental progress on the GAS version of this experiment will be presented.
The correlation between emotional intelligence and gray matter volume in university students.
Tan, Yafei; Zhang, Qinglin; Li, Wenfu; Wei, Dongtao; Qiao, Lei; Qiu, Jiang; Hitchman, Glenn; Liu, Yijun
2014-11-01
A number of recent studies have investigated the neurological substrates of emotional intelligence (EI), but none of them have considered the neural correlates of EI as measured using the Schutte Self-Report Emotional Intelligence Scale (SSREIS). This scale was developed based on the EI model of Salovey and Mayer (1990). In the present study, the SSREIS was adopted to estimate EI. Meanwhile, magnetic resonance imaging (MRI) and voxel-based morphometry (VBM) were used to evaluate the gray matter volume (GMV) of 328 university students. The results showed positive correlations between Monitor of Emotions and VBM measurements in the insula and orbitofrontal cortex. In addition, Utilization of Emotions was positively correlated with the GMV in the parahippocampal gyrus but negatively correlated with the VBM measurements in the fusiform gyrus and middle temporal gyrus. Furthermore, Social Ability had volume correlates in the vermis. These findings indicate that the EI model, which primarily focuses on individuals' abilities to appraise and express emotions, and to regulate and utilize emotions to solve problems, has identifiable neural correlates. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, James G.; Grechka, Vladimir
2006-07-08
A model study on fractured systems was performed using a concept that treats isotropic cracked systems as ensembles of cracked grains, by analogy to isotropic polycrystalline elastic media. The approach has two advantages: (a) The averaging performed is ensemble averaging, thus avoiding the criticism legitimately leveled at most effective medium theories of quasistatic elastic behavior for cracked media based on volume concentrations of inclusions. Since crack effects are largely independent of the volume they occupy in the composite, such a non-volume-based method offers an appealingly simple modeling alternative. (b) The second advantage is that both polycrystals and fractured media are stiffer than might otherwise be expected, due to natural bridging effects of the strong components. These same effects have also often been interpreted as crack-crack screening in high-crack-density fractured media, but there is no inherent conflict between these two interpretations of this phenomenon. Results of the study are somewhat mixed. The spread in elastic constants observed in a set of numerical experiments is found to be very comparable to the spread in values contained between the Reuss and Voigt bounds for the polycrystal model. However, computed Hashin-Shtrikman bounds are much too tight to be in agreement with the numerical data, showing that polycrystals of cracked grains tend to violate some implicit assumptions of the Hashin-Shtrikman bounding approach. However, the self-consistent estimates obtained for the random polycrystal model are nevertheless very good estimators of the observed average behavior.
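The Reuss and Voigt bounds mentioned above are the uniform-stress (harmonic) and uniform-strain (arithmetic) averages over the ensemble. A minimal two-component sketch with made-up moduli shows how they bracket the effective modulus:

```python
def voigt_reuss_bounds(fractions, moduli):
    """Voigt (uniform-strain, arithmetic) and Reuss (uniform-stress,
    harmonic) bounds on the effective modulus of an aggregate with the
    given phase fractions. Fractions must sum to 1."""
    Kv = sum(f * M for f, M in zip(fractions, moduli))
    Kr = 1.0 / sum(f / M for f, M in zip(fractions, moduli))
    return Kr, Kv

# 50/50 aggregate of a compliant and a stiff component (arbitrary
# units): the effective modulus must lie between Kr and Kv.
Kr, Kv = voigt_reuss_bounds([0.5, 0.5], [10.0, 40.0])  # Kr = 16, Kv = 25
```

Hashin-Shtrikman bounds always lie inside the Reuss-Voigt interval, which is why their violation by the cracked-grain polycrystal data is diagnostic of a broken assumption rather than a numerical artifact.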
Lenhard, R J; Rayner, J L; Davis, G B
2017-10-01
A model is presented to account for elevation-dependent residual and entrapped LNAPL above and below, respectively, the water-saturated zone when predicting subsurface LNAPL specific volume (fluid volume per unit area) and transmissivity from current and historic fluid levels in wells. Physically based free, residual, and entrapped LNAPL saturation distributions and LNAPL relative permeabilities are integrated over a vertical slice of the subsurface to yield the LNAPL specific volumes and transmissivity. The model accounts for the effects of fluctuating water tables. Hypothetical predictions are given for different porous media (loamy sand and clay loam), fluid levels in wells, and historic water-table fluctuations. It is shown that the elevation range from the LNAPL-water interface in a well to the upper elevation where the free LNAPL saturation approaches zero is the same for a given LNAPL thickness in a well regardless of porous media type. Further, the LNAPL transmissivity is largely dependent on current fluid levels in wells and not historic levels. Results from the model can aid in developing successful LNAPL remediation strategies and improving the design and operation of remedial activities. Results of the model can also aid in assessing the LNAPL recovery technology endpoint, based on the predicted transmissivity. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation - Copyright 2017. Published by Elsevier B.V. All rights reserved.
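The central bookkeeping step, integrating porosity times free-LNAPL saturation over a vertical slice to obtain specific volume, can be sketched numerically. The triangular saturation profile below is a made-up placeholder for the model's physically based saturation distributions:

```python
import numpy as np

def lnapl_specific_volume(z, S_free, porosity):
    """LNAPL specific volume (fluid volume per unit plan area):
    trapezoidal integration of porosity * free-LNAPL saturation
    over elevation z. Units of the result follow z (here m^3/m^2)."""
    S = porosity * np.asarray(S_free, dtype=float)
    return float(np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(z)))

# Hypothetical triangular free-saturation profile over a 1 m interval,
# peaking at S = 0.4 mid-slice, in a porosity-0.35 medium:
z = np.linspace(0.0, 1.0, 101)
S = np.interp(z, [0.0, 0.5, 1.0], [0.0, 0.4, 0.0])
V_s = lnapl_specific_volume(z, S, porosity=0.35)  # 0.35 * 0.2 = 0.07
```

The transmissivity calculation follows the same pattern, but integrates conductivity times LNAPL relative permeability over the same slice instead of porosity times saturation.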
Amini, Reza; Kaczka, David W.
2013-01-01
To determine the impact of ventilation frequency, lung volume, and parenchymal stiffness on ventilation distribution, we developed an anatomically based computational model of the canine lung. Each lobe of the model consists of an asymmetric branching airway network subtended by terminal, viscoelastic acinar units. The model allows for empiric dependencies of airway segment dimensions and parenchymal stiffness on transpulmonary pressure. We simulated the effects of lung volume and parenchymal recoil on global lung impedance and ventilation distribution from 0.1 to 100 Hz, with mean transpulmonary pressures from 5 to 25 cmH2O. With increasing lung volume, the distribution of acinar flows narrowed and became more synchronous for frequencies below resonance. At higher frequencies, large variations in acinar flow were observed. Maximum acinar flow occurred at the first antiresonance frequency, where lung impedance achieved a local maximum. The distribution of acinar pressures became very heterogeneous and amplified relative to tracheal pressure at the resonant frequency. These data demonstrate the important interaction between frequency and lung tissue stiffness on the distribution of acinar flows and pressures. These simulations provide useful information for the optimization of frequency, lung volume, and mean airway pressure during conventional ventilation or high-frequency oscillatory ventilation (HFOV). Moreover, our model indicates that an optimal HFOV bandwidth exists between the resonant and antiresonant frequencies, for which interregional gas mixing is maximized. PMID:23872936
Exact solutions to model surface and volume charge distributions
NASA Astrophysics Data System (ADS)
Mukhopadhyay, S.; Majumdar, N.; Bhattacharya, P.; Jash, A.; Bhattacharya, D. S.
2016-10-01
Many important problems in several branches of science and technology deal with charges distributed along a line, over a surface and within a volume. Recently, we have made use of new exact analytic solutions of surface charge distributions to develop the nearly exact Boundary Element Method (neBEM) toolkit. This 3D solver has been successful in removing some of the major drawbacks of the otherwise elegant Green's function approach and has been found to be very accurate throughout the computational domain, including near- and far-field regions. Use of truly distributed singularities (in contrast to nodally concentrated ones) on rectangular and right-triangular elements used for discretizing any three-dimensional geometry has essentially removed many of the numerical and physical singularities associated with the conventional BEM. In this work, we will present this toolkit and the development of several numerical models of space charge based on exact closed-form expressions. In one of the models, Particles on Surface (ParSur), the space charge inside a small elemental volume of any arbitrary shape is represented as being smeared on several surfaces representing the volume. From the studies, it can be concluded that the ParSur model is successful in getting estimates close to those obtained using first principles, especially close to and within the cell. In the paper, we will show initial applications of ParSur and other models in problems related to high energy physics.
A dynamical model on deposit and loan of banking: A bifurcation analysis
NASA Astrophysics Data System (ADS)
Sumarti, Novriana; Hasmi, Abrari Noor
2015-09-01
A dynamical model, one of the more sophisticated techniques based on mathematical equations, can determine an observed state, for example bank profits, for all future times based on the current state. It also shows whether small changes in the current state of the system create small or large changes in the future, depending on the model. In this research we develop a dynamical system of the form: dD/dt = f(D, L, r_D, r_L, r), dL/dt = g(D, L, r_D, r_L, r). Here D and r_D are the volume of deposits and the deposit rate, L and r_L are the volume of loans and the loan rate, and r is the interbank market rate. The model requires parameters that connect pairs of variables or pairs of derivative functions. In this paper we simulate the model for several parameter values. We perform a bifurcation analysis on the dynamics of the system in order to identify the parameters that control its stability behaviour. The results show that the system has a limit cycle for small values of the loan interest rate, so the deposit and loan volumes fluctuate and oscillate strongly. If the loan interest rate is too high, the loan volume decreases and vanishes, and the system converges to its carrying capacity.
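The abstract does not give the functional forms of f and g, so the sketch below integrates a hypothetical logistic-style pair (all coefficients invented for illustration) to show how deposit and loan volumes relax toward an equilibrium for one parameter choice; the paper's bifurcation analysis then varies r_L to find where such an equilibrium gives way to a limit cycle.

```python
def f(D, L, rD, rL, r):
    # Hypothetical: deposits grow with the deposit rate up to a
    # capacity of 100, net of interest paid out (r unused in this toy)
    return (0.1 + 2.0 * rD) * D * (1.0 - D / 100.0) - rD * D

def g(D, L, rD, rL, r):
    # Hypothetical: a fixed fraction of deposits is lent out;
    # loans are repaid at the loan rate
    return 0.05 * D - rL * L

def simulate(D0, L0, rD, rL, r, dt=0.01, steps=30000):
    # Forward-Euler integration of dD/dt = f, dL/dt = g
    D, L = D0, L0
    for _ in range(steps):
        dD, dL = f(D, L, rD, rL, r), g(D, L, rD, rL, r)
        D, L = D + dt * dD, L + dt * dL
    return D, L

# For rD = 0.02 the deposit equilibrium of these toy forms is
# D* = 100 * (1 - rD / (0.1 + 2*rD)) = 600/7, and with rL = 0.05
# loans settle at L* = 0.05 * D* / rL = D*:
D_end, L_end = simulate(50.0, 10.0, rD=0.02, rL=0.05, r=0.03)
```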
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
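The ESM conversion, and the static-versus-dynamic sizing distinction, can be sketched as below. The equivalency factors and the power profile are placeholders, not the mission values from the paper; real factors depend on the mission's power, thermal, and volume infrastructure.

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_hr_yr, duration_yr,
                           v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=0.47):
    """ESM: convert volume, power, cooling, and crewtime requirements
    into mass units. The equivalency factors (kg/m^3, kg/kW, kg/hr)
    are illustrative placeholders only."""
    return (mass_kg
            + volume_m3 * v_eq
            + power_kw * p_eq
            + cooling_kw * c_eq
            + crewtime_hr_yr * duration_yr * ct_eq)

# Static sizing uses the nominal (mean) power draw; a dynamic
# simulation exposes the peak the hardware must actually be sized for:
power_profile_kw = [3.0, 3.2, 5.1, 3.1]
esm_static = equivalent_system_mass(
    1000.0, 10.0, sum(power_profile_kw) / len(power_profile_kw),
    4.0, 500.0, 2.0)
esm_dynamic = equivalent_system_mass(
    1000.0, 10.0, max(power_profile_kw), 4.0, 500.0, 2.0)
# esm_dynamic > esm_static: peak-driven sizing adds emplacement mass
```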
Improved particle position accuracy from off-axis holograms using a Chebyshev model.
Öhman, Johan; Sjödahl, Mikael
2018-01-01
Side scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find the axial position. The method is based on the change in the sign of the curvature around the true particle position plane. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly and compared to a plane wave in the reconstructed volume. From the model coefficients, it is possible to find particle locations. Simulated results show increased performance in the presence of noise, compared to the use of finite difference methods. The standard deviation is decreased from 3-39 μm to 6-10 μm for varying noise levels. Experimental results show a corresponding improvement where the standard deviation is decreased from 18 μm to 13 μm.
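The curvature-sign criterion can be sketched as follows: fit a low-order Chebyshev model to the transverse phase profile in each reconstructed plane, take the T2 coefficient as a curvature proxy, and interpolate its zero crossing to sub-plane accuracy. The synthetic phase field below is an idealized stand-in for a real holographic reconstruction, not the paper's data.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def curvature_profile(phase_xz, x):
    """For each axial plane, fit a degree-4 Chebyshev polynomial to
    the transverse phase profile; the T2 coefficient tracks the sign
    and magnitude of the wavefront curvature."""
    return np.array([C.chebfit(x, phase, deg=4)[2] for phase in phase_xz])

def axial_position(z, curv):
    """Particle plane = zero crossing of the curvature, located by
    linear interpolation between the bracketing samples."""
    k = int(np.where(np.diff(np.sign(curv)) != 0)[0][0])
    t = curv[k] / (curv[k] - curv[k + 1])
    return z[k] + t * (z[k + 1] - z[k])

# Synthetic test: quadratic phase whose curvature flips sign at the
# true particle plane z0 = 1.3
x = np.linspace(-1.0, 1.0, 64)
z = np.linspace(0.0, 2.0, 21)
phase_xz = np.array([(zi - 1.3) * x**2 for zi in z])
z0_est = axial_position(z, curvature_profile(phase_xz, x))  # ~1.3
```

Because the fit aggregates the whole transverse profile, this parametric estimate is less sensitive to pixel noise than finite-difference curvature estimates, which is the improvement the paper quantifies.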
Relationship between accident severity and full-scale crash test. Volume II, Appendices
DOT National Transportation Integrated Search
1984-08-01
Available accident files are used to generate a 412-accident data base of guardrail impacts. This base is analyzed to develop a statistical model for predicting accident severity index (ASI) as a function of vehicle type or weight, impact speed, and ...
ERIC Educational Resources Information Center
Nemanich, Donald, Ed.
1975-01-01
Articles in this volume of the "Illinois English Bulletin" include "Competencies in Teaching English" by Alan C. Purves, which sets forth a tentative model for planning competency-based instruction and certification based on concepts, teaching acts, skills, and strategies; "Passing the Buck Versus the Teaching of English" by Dennis Q. McInerny,…
Monitoring and modeling of pavement response and performance task B : New York volume 3, I90.
DOT National Transportation Integrated Search
2012-05-01
This research presents the evaluation and comparison of two Portland-cement concrete (PCC) pavement test : sections with cement-treated permeable bases (CTPB) and dense-graded aggregate bases (DGAB) on the Interstate : 90 Thruway in New York. Two ins...
CADDIS Volume 5. Causal Databases: Interactive Conceptual Diagrams (ICDs)
The Interactive Conceptual Diagram (ICD) section of CADDIS allows users to create conceptual model diagrams, search a literature-based evidence database, and then attach that evidence to their diagrams.
Renteria Marquez, I A; Bolborici, V
2017-05-01
This manuscript presents a method to model in detail the piezoelectric traveling wave rotary ultrasonic motor (PTRUSM) stator response under the action of DC and AC voltages. The stator is modeled with a discrete two dimensional system of equations using the finite volume method (FVM). In order to obtain accurate results, a model of the stator bridge is included into the stator model. The model of the stator under the action of DC voltage is presented first, and the results of the model are compared versus a similar model using the commercial finite element software COMSOL Multiphysics. One can observe that there is a difference of less than 5% between the displacements of the stator using the proposed model and the one with COMSOL Multiphysics. After that, the model of the stator under the action of AC voltages is presented. The time domain analysis shows the generation of the traveling wave in the stator surface. One can use this model to accurately calculate the stator surface velocities, elliptical motion of the stator surface and the amplitude and shape of the stator traveling wave. A system of equations discretized with the finite volume method can easily be transformed into electrical circuits, because of that, FVM may be a better choice to develop a model-based control strategy for the PTRUSM. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sheikholeslami, M.; Li, Zhixiong; Shamlooei, M.
2018-06-01
The control volume based finite element method (CVFEM) is applied to simulate H2O-based nanofluid radiative and convective heat transfer inside a porous medium. A non-Darcy model is employed for the porous medium. The influences of the Hartmann number, nanofluid volume fraction, radiation parameter, Darcy number, number of undulations, and Rayleigh number on nanofluid behavior are demonstrated. The thermal conductivity of the nanofluid is estimated by means of a previous experimental correlation. Results show that the Nusselt number increases with increasing permeability of the porous medium. The effect of the Hartmann number on the rate of heat transfer is opposite to that of the radiation parameter.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-30
.... Moreover, the competitive pressures from other exchanges in electronic orders and different business model... electronic business and compete with other exchanges for such business. The business models surrounding...). The specific volume thresholds of the Program's tiers were set based upon business determinations and...
Edge gradients evaluation for 2D hybrid finite volume method model
USDA-ARS?s Scientific Manuscript database
In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Tayl...
Human Parental Care: Universal Goals, Cultural Strategies, Individual Behavior.
ERIC Educational Resources Information Center
LeVine, Robert A.
1988-01-01
A model of parental behavior as adaptation in agrarian and urban-industrial societies is proposed and examined in light of the evidence in this volume. The model is based on the concept of parental investment strategies for allocating time, attention, and domestic resources to raising children. (RH)
Investigations in Science Education. Volume 13, Number 3, 1987.
ERIC Educational Resources Information Center
Blosser, Patricia E., Ed.; Helgeson, Stanley L., Ed.
1987-01-01
Abstracts and abstractors' critiques of six research reports related to preservice teacher education and instruction are presented. Aspects addressed in the studies include: (1) teaching strategy analysis models in middle school science education courses; (2) concerns-based adoption model (CBAM): basis for an elementary science methods course; (3)…