Sample records for process modeling volume

  1. Working Papers in Dialogue Modeling, Volume 2.

    ERIC Educational Resources Information Center

    Mann, William C.; And Others

    The technical working papers that comprise the two volumes of this document are related to the problem of creating a valid process model of human communication in dialogue. In Volume 2, the first paper concerns study methodology, and raises such issues as the choice between system-building and process-building, and the advantages of studying cases…

  2. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 2: Framework process description

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paula S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    In the second volume of the Demonstration Framework Document, the graphical representation of the demonstration framework is given. This second document was created to facilitate the reading and comprehension of the demonstration framework. It is designed to be viewed in parallel with Section 4.2 of the first volume to give a picture of the relationships between the UOBs (Units of Behavior) of the model. The model is quite large, and the design team felt that this form of presentation would make it easier for the reader to get a feel for the processes described in this document. The IDEF3 (Process Description Capture Method) diagrams of the processes of an Information System Development are presented. Volume 1 describes the processes and the agents involved with each process, while this volume graphically shows the precedence relationships among the processes.

  3. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    PubMed

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing normal time scheduling, especially in mass customization logistics service environments. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed. Several interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the volume of orders that can be inserted first increases and then stabilizes. Second, supply chain performance is best when the inserted order volume equals the surplus capacity of normal operation in the mass service process. Third, the larger the normal operation capacity of the mass service process, the larger the volume of orders that can be inserted. Moreover, improving the normal operation capacity of the mass service process is more useful than increasing the completion time delay coefficient.

  4. Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System

    NASA Technical Reports Server (NTRS)

    Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.

    2005-01-01

    In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.

  5. An Order Insertion Scheduling Model of Logistics Service Supply Chain Considering Capacity and Time Factors

    PubMed Central

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing normal time scheduling, especially in mass customization logistics service environments. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed. Several interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the volume of orders that can be inserted first increases and then stabilizes. Second, supply chain performance is best when the inserted order volume equals the surplus capacity of normal operation in the mass service process. Third, the larger the normal operation capacity of the mass service process, the larger the volume of orders that can be inserted. Moreover, improving the normal operation capacity of the mass service process is more useful than increasing the completion time delay coefficient. PMID:25276851

  6. PLANNING MODELS FOR URBAN WATER SUPPLY EXPANSION. VOLUME 1. PLANNING FOR THE EXPANSION OF REGIONAL WATER SUPPLY SYSTEMS

    EPA Science Inventory

    A three-volume report was developed relative to the modelling of investment strategies for regional water supply planning. Volume 1 is the study of capacity expansion over time. Models to aid decision making for the deterministic case are presented, and a planning process under u...

  7. Reconciling transport models across scales: The role of volume exclusion

    NASA Astrophysics Data System (ADS)

    Taylor, P. R.; Yates, C. A.; Simpson, M. J.; Baker, R. E.

    2015-10-01

    Diffusive transport is a universal phenomenon throughout both the biological and physical sciences, and models of diffusion are routinely used to interrogate diffusion-driven processes. However, most models neglect the role of volume exclusion, which can significantly alter diffusive transport, particularly within biological systems where the diffusing particles might occupy a significant fraction of the available space. In this work we use a random walk approach to provide a means to reconcile models that incorporate crowding effects on different spatial scales. Our work demonstrates that coarse-grained models incorporating simplified descriptions of excluded volume can be used in many circumstances, but that care must be taken not to push the coarse-graining process too far.
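
    A minimal sketch of the volume-exclusion idea in this record: a one-dimensional lattice random walk (a simple exclusion process) in which a hop is accepted only if the target site is empty. The lattice size, occupancies, and sweep count are illustrative choices, not values from the paper.

      import random

      # 1D exclusion walk on a ring: crowding suppresses spreading.
      L, SWEEPS = 200, 500

      def mean_squared_displacement(occupancy, seed=1):
          rng = random.Random(seed)
          pos = rng.sample(range(L), int(occupancy * L))       # unwrapped positions
          occupied = set(p % L for p in pos)
          start = list(pos)
          for _ in range(SWEEPS):
              for i in rng.sample(range(len(pos)), len(pos)):  # one attempt per agent
                  step = rng.choice((-1, 1))
                  target = (pos[i] + step) % L
                  if target not in occupied:                   # the exclusion rule
                      occupied.discard(pos[i] % L)
                      occupied.add(target)
                      pos[i] += step
          return sum((p - s) ** 2 for p, s in zip(pos, start)) / len(pos)

      print("dilute  (5% occupied) MSD:", mean_squared_displacement(0.05))
      print("crowded (60% occupied) MSD:", mean_squared_displacement(0.60))

    Comparing the two runs shows the tagged-particle displacement dropping as occupancy rises, which is the qualitative effect coarse-grained models must reproduce.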

  8. A method for simulating the entire leaking process and calculating the liquid leakage volume of a damaged pressurized pipeline.

    PubMed

    He, Guoxi; Liang, Yongtu; Li, Yansong; Wu, Mengyu; Sun, Liying; Xie, Cheng; Li, Feng

    2017-06-15

    The accidental leakage of long-distance pressurized oil pipelines is a major source of risk, capable of causing extensive damage to human health and the environment. However, the complexity of the leaking process, with its complex boundary conditions, makes the leakage volume difficult to calculate. In this study, the leaking process is divided into four stages based on the strength of the transient pressure, and three models are established to calculate the leaking flowrate and volume. First, a negative pressure wave propagation attenuation model is applied to calculate the sizes of the orifices. Second, a transient oil leaking model, consisting of continuity, momentum conservation, energy conservation, and orifice flow equations, is built to calculate the leakage volume. Third, a steady-state oil leaking model is employed to calculate the leakage after valves and pumps shut down. Moreover, the sensitive factors that affect the orifice leak coefficient and the leakage volume are analyzed to determine the most influential one. To validate the numerical simulation, two types of leakage tests with different hole sizes were conducted on Sinopec product pipelines. Further validation was carried out with commercial software to supplement the limited experimental data. The leaking process under different leaking conditions is thus described and analyzed. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. CFD Analysis of nanofluid forced convection heat transport in laminar flow through a compact pipe

    NASA Astrophysics Data System (ADS)

    Yu, Kitae; Park, Cheol; Kim, Sedon; Song, Heegun; Jeong, Hyomin

    2017-08-01

    In the present paper, developing laminar forced convection flows were numerically investigated using water-Al2O3 nanofluid in a circular compact pipe with a 4.5 mm diameter. Each model assumes steady state and uniform heat flux (UHF) at the wall. All numerical experiments were performed at Re = 1050, and the nanofluid models were defined by the alumina volume fraction. Single-phase fluid models were specified through calculations of the nanofluid's physical and thermal properties; the two-phase (mixture granular) model used a particle diameter of 100 nm. The results show that the Nusselt number and heat transfer rate improve as the Al2O3 volume fraction increases. All numerical flow simulations were performed with FLUENT.

  10. Lysine production from methanol at 50 degrees C using Bacillus methanolicus: Modeling volume control, lysine concentration, and productivity using a three-phase continuous simulation.

    PubMed

    Lee, G H; Hur, W; Bremmon, C E; Flickinger, M C

    1996-03-20

    A simulation was developed based on experimental data obtained in a 14-L reactor to predict the growth and L-lysine accumulation kinetics, and the change in volume, of a large-scale (250 m^3) Bacillus methanolicus methanol-based process. Homoserine auxotrophs of B. methanolicus MGA3 are unique methylotrophs because of their ability to secrete lysine during aerobic growth and threonine starvation at 50 degrees C. Dissolved methanol (100 mM), pH, dissolved oxygen tension (0.063 atm), and threonine levels were controlled to obtain threonine-limited conditions and high cell density (25 g dry cell weight/L) in a 14-L reactor. As a fed-batch process, the additions of neat methanol (fed on demand), threonine, and other nutrients cause the volume of the fermentation to increase and the final lysine concentration to decrease. In addition, water produced by methanol metabolism contributes to the increase in reactor volume. A three-phase approach was used to predict the rate of change of culture volume based on carbon dioxide production and methanol consumption. This model was used to evaluate volume control strategies for optimizing lysine productivity. A constant volume reactor process with variable feeding and continuous removal of broth and cells (VF(cstr)) resulted in higher lysine productivity than a fed-batch process without volume control. The model predicts the variation in lysine productivity with changes in growth and in specific lysine productivity. Simple modifications of the model allow one to investigate other high-lysine-secreting strains with different growth and lysine productivity characteristics. Strain NOA2#13A5-2, which secretes lysine and other end-products, was modeled using both growth- and non-growth-associated lysine productivity. A modified version of the model was used to simulate the change in culture volume of another L-lysine-producing mutant (NOA2#13A52-8A66) with reduced secretion of end-products. The modified simulation indicated that growth-associated production dominates in strain NOA2#13A52-8A66. (c) 1996 John Wiley & Sons, Inc.
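
    The volume bookkeeping described above can be sketched in a few lines. This is a hedged illustration only: it assumes methanol fed on demand is consumed at the feed rate and fully oxidized (CH3OH + 1.5 O2 -> CO2 + 2 H2O), which overstates metabolic water because part of the carbon goes to biomass and lysine, and the feed rates are invented, not the paper's calibrated three-phase model.

      # Fed-batch volume grows with methanol feed, nutrient feed, and metabolic water.
      MW_MEOH, RHO_MEOH = 32.04, 0.792   # g/mol, g/mL
      MW_H2O,  RHO_H2O  = 18.02, 1.000

      def volume_trajectory(v0_l, hours, meoh_feed_l_per_h, nutrient_feed_l_per_h):
          """Integrate dV/dt = F_meoh + F_nutrients + metabolic water (hourly steps)."""
          v = v0_l
          for _ in range(hours):
              mol_meoh = meoh_feed_l_per_h * 1000 * RHO_MEOH / MW_MEOH  # fed ~ consumed
              water_l = 2 * mol_meoh * MW_H2O / RHO_H2O / 1000          # 2 mol H2O per mol CH3OH
              v += meoh_feed_l_per_h + nutrient_feed_l_per_h + water_l
          return v

      # Hypothetical example: 14 L reactor, 0.05 L/h methanol, 0.02 L/h nutrients, 40 h.
      print(f"final volume: {volume_trajectory(14.0, 40, 0.05, 0.02):.1f} L")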

  11. Modeling of turbulent transport as a volume process

    NASA Technical Reports Server (NTRS)

    Jennings, Mark J.; Morel, Thomas

    1987-01-01

    An alternative type of modeling was proposed for the turbulent transport terms in Reynolds-averaged equations. One particular implementation of the model was considered, based on the two-point velocity correlations. The model was found to reproduce the trends but not the magnitude of the nonisotropic behavior of the turbulent transport. Some interesting insights were developed concerning the shape of the contracted two-point correlation volume. This volume is strongly deformed by mean shear from the spherical shape found in unstrained flows. Of particular interest is the finding that the shape is sharply waisted, indicating preferential lines of communication, which should have a direct effect on turbulent transfer and on other processes.

  12. Process cost and facility considerations in the selection of primary cell culture clarification technology.

    PubMed

    Felo, Michael; Christensen, Brandon; Higgins, John

    2013-01-01

    The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.
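
    The cost crossover reported above can be condensed into a small rule-of-thumb helper. The thresholds are the ones stated in the abstract; the function is an illustrative summary, not part of the paper's process models.

      def suggest_clarification(bioreactor_volume_l: float) -> str:
          """Heuristic from the reported cost comparison."""
          if bioreactor_volume_l < 1000:
              return "multi-stage depth filtration (lower cost at this scale)"
          if bioreactor_volume_l > 5000:
              return "centrifugation + depth filtration (significant savings)"
          return ("either; decide on facility utilization, capital, "
                  "timelines, and process robustness")

      for v in (500, 2000, 10000):
          print(v, "L ->", suggest_clarification(v))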

  13. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
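
    The representation underlying the chaotic moving average can be illustrated directly: convolve a short linear filter with a chaotic innovation sequence. The logistic-map innovation and the filter taps are assumptions for illustration; this shows the model structure only, not Scargle's minimum phase-volume deconvolution estimator.

      import numpy as np

      # Chaotic innovation from a logistic map in its chaotic regime (r = 4).
      x, innovation = 0.3, []
      for _ in range(500):
          x = 4.0 * x * (1.0 - x)
          innovation.append(x - 0.5)            # roughly zero-mean

      h = np.array([1.0, 0.6, 0.25, 0.1])       # hypothetical moving-average filter
      series = np.convolve(innovation, h)[: len(innovation)]
      print("innovation variance:", np.var(innovation))
      print("filtered series variance:", np.var(series))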

  14. Mathematical model of the loan portfolio dynamics in the form of Markov chain considering the process of new customers attraction

    NASA Astrophysics Data System (ADS)

    Bozhalkina, Yana

    2017-12-01

    A mathematical model of loan portfolio structure change, in the form of a Markov chain, is explored. The model combines in one scheme the processes of customer attraction, customer selection based on credit score, and loan repayment. It describes the dynamics of the structure and volume of the loan portfolio, which allows medium-term forecasts of profitability and risk. Within the model, corrective actions by bank management to increase lending volume or to reduce risk are formalized.
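
    A minimal numerical sketch of such a chain, with hypothetical states, transition probabilities, and new-customer inflow (the record does not publish the actual matrix):

      import numpy as np

      # States: current, overdue, defaulted, repaid (last two absorbing).
      P = np.array([
          [0.90, 0.05, 0.01, 0.04],   # current -> ...
          [0.40, 0.45, 0.10, 0.05],   # overdue -> ...
          [0.00, 0.00, 1.00, 0.00],   # defaulted (absorbing)
          [0.00, 0.00, 0.00, 1.00],   # repaid (absorbing)
      ])
      inflow = np.array([100.0, 0.0, 0.0, 0.0])    # new loans granted per period

      state = np.array([1000.0, 80.0, 20.0, 0.0])  # portfolio volume by state
      for month in range(12):
          state = state @ P + inflow               # one period of the chain
      print("after 12 months:",
            dict(zip(["current", "overdue", "default", "repaid"], state.round(1))))

    Raising the inflow vector or reshaping the overdue row is how the abstract's "corrective actions" (more lending, less risk) would enter this formalization.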

  15. Auto-recognition of surfaces and auto-generation of material removal volume for finishing process

    NASA Astrophysics Data System (ADS)

    Kataraki, Pramod S.; Salman Abu Mansor, Mohd

    2018-03-01

    Auto-recognition of surfaces and auto-generation of material removal volumes for the recognised surfaces have become necessary for successful downstream manufacturing activities such as automated process planning and scheduling. A few researchers have contributed methods for generating the material removal volume of a product, but these resulted in discontinuities between two adjacent material removal volumes generated from two adjacent faces forming a convex geometry. To address the need for discontinuity-free material removal volume generation, an algorithm was developed that automatically recognises the surfaces of a computer aided design (CAD) model and auto-generates the material removal volume for the finishing process of the recognised surfaces. The surfaces of the CAD model are successfully recognised by the developed algorithm and the required material removal volume is obtained, eliminating the material removal volume discontinuity that occurred in earlier studies.

  16. Does lake size matter? Combining morphology and process modeling to examine the contribution of lake classes to population-scale processes

    USGS Publications Warehouse

    Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.

    2014-01-01

    With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers' understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter, and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight critical thresholds: total perimeter, area, and volume would be evenly distributed across lake size-classes at Pareto slopes of 0.63, 1, and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching "lake population" level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even though smaller lakes contribute relatively less to total surface area than larger lakes, the increase in carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
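
    The upscaling step can be sketched by combining a Pareto size distribution with a per-area rate law. Both the Pareto slope and the carbon-rate exponent below are placeholders, not the paper's fitted values; the point is only how class counts, class areas, and rates multiply together. Note that at slope 1 total area is spread roughly evenly across size classes (the threshold cited above), so the rate law alone tilts the carbon total toward small lakes.

      import numpy as np

      c = 1.0                                  # Pareto (power-law) slope for area
      edges = np.logspace(-2, 3, 26)           # lake area classes, km^2
      counts = edges[:-1] ** (-c) - edges[1:] ** (-c)   # relative number per class
      mid = np.sqrt(edges[:-1] * edges[1:])             # class midpoints
      area_per_class = counts * mid                     # total area by class
      carbon_rate = mid ** (-0.3)              # hypothetical: faster accumulation in small lakes
      carbon_per_class = area_per_class * carbon_rate

      share = carbon_per_class / carbon_per_class.sum()
      print("size class with largest carbon share:", mid[share.argmax()], "km^2")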

  17. Airport Landside. Volume II. The Airport Landside Simulation Model (ALSIM) Description and Users Guide.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume provides a general description of the Airport Landside Simulation Model. A summary of simulated passenger and vehicular processing through the landside is presented. Program operating characteristics and assumptions are documented and a c...

  18. Modeling of hot-mix asphalt compaction : a thermodynamics-based compressible viscoelastic model

    DOT National Transportation Integrated Search

    2010-12-01

    Compaction is the process of reducing the volume of hot-mix asphalt (HMA) by the application of external forces. As a result of compaction, the volume of air voids decreases, aggregate interlock increases, and interparticle friction increases. The qu...

  19. Application of a Model for Quenching and Partitioning in Hot Stamping of High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Liu, Zhuang; Wang, Yanan; Rolfe, Bernard; Wang, Liang; Zhang, Yisheng

    2018-04-01

    Application of the quenching and partitioning process in hot stamping has proven to be an effective method to improve the plasticity of advanced high-strength steels (AHSSs). In this study, the hot stamping and partitioning process of the advanced high-strength steel 30CrMnSi2Nb is investigated with a hot stamping mold. For a given partitioning time and temperature, the influence of quenching temperature on the evolution of the microstructure volume fractions and the mechanical properties of this steel is studied in detail. In addition, a model of the quenching and partitioning process is applied to predict the carbon diffusion and interface migration during partitioning, which determine the retained austenite volume fraction and the final properties of the part. The predicted trends of the retained austenite volume fraction agree with the experimental results. In both cases, the volume fraction of retained austenite first increases and then decreases with increasing quenching temperature. The optimal quenching temperature is approximately 290 °C for 30CrMnSi2Nb with partitioning conditions of 425 °C and 20 seconds. The model can thus help determine process parameters that retain as much austenite as possible.

  20. How large is the typical subarachnoid hemorrhage? A review of current neurosurgical knowledge.

    PubMed

    Whitmore, Robert G; Grant, Ryan A; LeRoux, Peter; El-Falaki, Omar; Stein, Sherman C

    2012-01-01

    Despite the morbidity and mortality of subarachnoid hemorrhage (SAH), the average volume of a typical hemorrhage is not well defined. Animal models of SAH often do not accurately mimic the human disease process. The purpose of this study is to estimate the average SAH volume, allowing standardization of animal models of the disease. We performed a MEDLINE search of SAH volume and erythrocyte counts in human cerebrospinal fluid as well as for volumes of blood used in animal injection models of SAH, from 1956 to 2010. We polled members of the American Association of Neurological Surgeons (AANS) for estimates of typical SAH volume. Using quantitative data from the literature, we calculated the total volume of SAH as equal to the volume of blood clotted in basal cisterns plus the volume of dispersed blood in cerebrospinal fluid. The results of the AANS poll confirmed our estimates. The human literature yielded 322 publications and animal literature, 237 studies. Four quantitative human studies reported blood clot volumes ranging from 0.2 to 170 mL, with a mean of ∼20 mL. There was only one quantitative study reporting cerebrospinal fluid red blood cell counts from serial lumbar puncture after SAH. Dispersed blood volume ranged from 2.9 to 45.9 mL, and we used the mean of 15 mL for our calculation. Therefore, total volume of SAH equals 35 mL. The AANS poll yielded 176 responses, ranging from 2 to 350 mL, with a mean of 33.9 ± 4.4 mL. Based on our estimate of total SAH volume of 35 mL, animal injection models may now become standardized for more accurate portrayal of the human disease process. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Effects of stiffness and volume on the transit time of an erythrocyte through a slit.

    PubMed

    Salehyar, Sara; Zhu, Qiang

    2017-06-01

    By using a fully coupled fluid-cell interaction model, we numerically simulate the dynamic process of a red blood cell passing through a slit driven by an incoming flow. The model is achieved by combining a multiscale model of the composite cell membrane with a boundary element fluid dynamics model based on the Stokes flow assumption. Our focus is on the correlation between the transit time (the time it takes to finish the whole translocation process) and the different conditions involved (flow speed, cell orientation, cell stiffness, cell volume, etc.). According to the numerical prediction (with some exceptions), the transit time rises as the cell is stiffened. It is also highly sensitive to volume increase inside the cell. In general, even slightly swollen cells (i.e., cells whose internal volume is increased while the surface area is kept unchanged) travel dramatically slower through the slit. For these cells, there is also an increased chance of blockage.

  2. Application of Compressible Volume of Fluid Model in Simulating the Impact and Solidification of Hollow Spherical ZrO2 Droplet on a Surface

    NASA Astrophysics Data System (ADS)

    Safaei, Hadi; Emami, Mohsen Davazdah; Jazi, Hamidreza Salimi; Mostaghimi, Javad

    2017-12-01

    Applications of hollow spherical particles in thermal spraying have developed in recent years, accompanied by experimental and numerical studies seeking to better understand the impact of a hollow droplet on a surface. During such an impact, the volume and density of the gas trapped inside the droplet change, and numerical models should be able to simulate these changes and their consequent effects. The aim of this study is to numerically simulate the impact of a hollow ZrO2 droplet on a flat surface using the volume of fluid technique for compressible flows. An open-source, finite-volume-based CFD code was used to perform the simulations, with appropriate subprograms added to handle the studied cases. Simulation results were compared with the available experimental data. Results showed that at high impact velocities (U0 > 100 m/s), the compression of the trapped gas inside the droplet played a significant role in the impact dynamics; at such velocities, the droplet splashed explosively. Compressibility effects result in a more porous splat compared to the corresponding incompressible model. Moreover, the compressible model predicted a higher spread factor than the incompressible model, due to the planetary structure of the splat.

  3. Initial stage of nucleation-mediated crystallization of a supercooled melt

    NASA Astrophysics Data System (ADS)

    Chernov, A. A.; Pil'nik, A. A.; Islamov, D. R.

    2016-09-01

    A kinetic model of nucleation-mediated crystallization of a supercooled melt is presented in this work. It correctly takes into account the change in supercooling of the initial phase during the formation and evolution of the new phase. The model makes it possible to find the characteristic time of the process, the time course of the crystal-phase volume, and the microstructure of the solidified material. The distinctive feature of the model is the use of "forbidden" zones: regions of the volume where the formation of new nucleation centers is suppressed.

  4. Efficient gaussian density formulation of volume and surface areas of macromolecules on graphical processing units.

    PubMed

    Zhang, Baofeng; Kilburg, Denise; Eastman, Peter; Pande, Vijay S; Gallicchio, Emilio

    2017-04-15

    We present an algorithm to efficiently compute accurate volumes and surface areas of macromolecules on graphical processing unit (GPU) devices using an analytic model which represents atomic volumes by continuous Gaussian densities. The volume of the molecule is expressed by means of the inclusion-exclusion formula, which is based on the summation of overlap integrals among multiple atomic densities. The surface area of the molecule is obtained by differentiation of the molecular volume with respect to atomic radii. The many-body nature of the model makes a port to GPU devices challenging. To our knowledge, this is the first reported full implementation of this model on GPU hardware. To accomplish this, we have used recursive strategies to construct the tree of overlaps and to accumulate volumes and their gradients on the tree data structures so as to minimize memory contention. The algorithm is used in the formulation of a surface area-based non-polar implicit solvent model implemented as an open source plug-in (named GaussVol) for the popular OpenMM library for molecular mechanics modeling. GaussVol is 50 to 100 times faster than our best optimized implementation for the CPUs, achieving speeds in excess of 100 ns/day with 1 fs time-step for protein-sized systems on commodity GPUs. © 2017 Wiley Periodicals, Inc.
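
    The core of the Gaussian inclusion-exclusion idea fits in a few lines. This sketch truncates the formula at pair overlaps and uses unit-amplitude Gaussians whose integrals equal the hard-sphere volumes; the production algorithm described above recurses over higher-order overlaps on a tree, so treat this as a conceptual illustration, not GaussVol itself.

      import math

      def alpha(radius):
          """Exponent making the Gaussian's integral equal the sphere volume,
          since the integral of exp(-a*r^2) over 3D space is (pi/a)^(3/2)."""
          v = 4.0 / 3.0 * math.pi * radius ** 3
          return math.pi / v ** (2.0 / 3.0)

      def pair_overlap(r1, c1, r2, c2):
          """Closed-form overlap integral: a product of Gaussians is a Gaussian."""
          a1, a2 = alpha(r1), alpha(r2)
          d2 = sum((x - y) ** 2 for x, y in zip(c1, c2))
          return math.exp(-a1 * a2 / (a1 + a2) * d2) * (math.pi / (a1 + a2)) ** 1.5

      atoms = [(1.7, (0.0, 0.0, 0.0)), (1.5, (0.0, 0.0, 2.4))]  # (radius, center), angstroms
      v = sum(4.0 / 3.0 * math.pi * r ** 3 for r, _ in atoms)
      for i in range(len(atoms)):
          for j in range(i + 1, len(atoms)):
              v -= pair_overlap(*atoms[i], *atoms[j])           # inclusion-exclusion, pairs only
      print(f"approximate molecular volume: {v:.2f} A^3")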

  5. Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI

    PubMed Central

    Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.

    2016-01-01

    We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524

  6. Three Dimensional Projection Environment for Molecular Design and Surgical Simulation

    DTIC Science & Technology

    2011-08-01

    bypasses the cumbersome meshing process. The deformation model is only comprised of mass nodes, which are generated by sampling the object volume before...force should minimize the penetration volume, the haptic feedback force is derived directly. Additionally, a post-processing technique is developed to...render distinct physical tissue properties across different interaction areas. The proposed approach does not require any pre-processing and is

  7. [Modeling and analysis of volume conduction based on field-circuit coupling].

    PubMed

    Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming

    2012-08-01

    Numerical simulations of volume conduction can be used to analyze the process of energy transfer and to explore the effects of physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method and developed a 3D coupled field-circuit model of volume conduction based on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for optimizing energy transfer. A field-circuit coupling model with circular cylinder electrodes was established on the FEM3.5 software platform. On this basis, the effects of electrode cross-section area, electrode distance, and circuit parameters on the performance of the volume conduction system were obtained, providing a basis for the optimized design of energy transfer efficiency.

  8. System Engineering Concept Demonstration, Process Model. Volume 3

    DTIC Science & Technology

    1992-12-01

    Process or Process Model: The System Engineering process must be the enactment of the aforementioned definitions. Therefore, a process is an enactment of a...Prototype Tradeoff Scenario demonstrates six levels of abstraction in the Process Model. The Process Model symbology is explained within the "Help" icon...

  9. Statistical self-similarity of hotspot seamount volumes modeled as self-similar criticality

    USGS Publications Warehouse

    Tebbens, S.F.; Burroughs, S.M.; Barton, C.C.; Naar, D.F.

    2001-01-01

    The processes responsible for hotspot seamount formation are complex, yet the cumulative frequency-volume distribution of hotspot seamounts in the Easter Island/Salas y Gomez Chain (ESC) is found to be well-described by an upper-truncated power law. We develop a model for hotspot seamount formation where uniform energy input produces events initiated on a self-similar distribution of critical cells. We call this model Self-Similar Criticality (SSC). By allowing the spatial distribution of magma migration to be self-similar, the SSC model recreates the observed ESC seamount volume distribution. The SSC model may have broad applicability to other natural systems.
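
    The upper-truncated power law has a simple closed form: N(>V) = C * (V^-beta - Vmax^-beta), which bends below a pure power law as V approaches the truncation volume Vmax. The parameter values below are invented for illustration; the fitted ESC values are in the paper.

      C, beta, vmax = 500.0, 0.8, 3000.0   # hypothetical fit parameters

      def n_exceeding(v_km3):
          """Cumulative number of seamounts with volume greater than v_km3."""
          return C * (v_km3 ** -beta - vmax ** -beta)

      for v in (1.0, 10.0, 100.0, 1000.0):
          print(f"N(>{v:g} km^3) = {n_exceeding(v):.1f}")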

  10. Column Testing and 1D Reactive Transport Modeling to Evaluate Uranium Plume Persistence Processes

    NASA Astrophysics Data System (ADS)

    Johnson, R. H.; Morrison, S.; Morris, S.; Tigar, A.; Dam, W. L.; Dayvault, J.

    2015-12-01

    At many U.S. Department of Energy Office of Legacy Management sites, 100-year natural flushing was selected as a remedial option for groundwater uranium plumes. However, current data indicate that natural flushing is not occurring as quickly as expected and that solid-phase and aqueous uranium concentrations are persistent. At the Grand Junction, Colorado, office site, column testing was completed on core collected below an area where uranium mill tailings had been removed. The total uranium concentration in this core was 13.2 mg/kg, and the column was flushed with laboratory-created water containing no uranium, with chemistry similar to the nearby Gunnison River. The core was flushed for a total of 91 pore volumes, producing a maximum effluent uranium concentration of 6,110 μg/L at 2.1 pore volumes and a minimum uranium concentration of 36.2 μg/L at the final pore volume. These results indicate complex geochemical reactions at small pore volumes and a long tailing effect at greater pore volumes. Stop-flow data indicate the occurrence of non-equilibrium processes that create uranium concentration rebound. These data confirm the potential for plume persistence, which is occurring at the field scale. 1D reactive transport modeling was completed using PHREEQC (a geochemical model) and calibrated to the column test data manually and with PEST (an inverse modeling calibration routine). The processes of sorption, dual porosity with diffusion, mineral dissolution, dispersion, and cation exchange were evaluated separately and in combination. The calibration results indicate that sorption and dual porosity are major processes in explaining the column test data. These processes are also supported by fission track photographs that show solid-phase uranium residing in less mobile pore spaces. These procedures provide valuable information on plume persistence and secondary source processes that may be used to better inform and evaluate remedial strategies, including natural flushing.
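
    A conceptual sketch of why dual porosity produces effluent tailing and stop-flow rebound: a single mixed cell with mobile and immobile pore regions exchanging by first-order mass transfer. This is a toy model, not the PHREEQC/PEST setup; all parameters are invented and equal mobile and immobile pore volumes are assumed.

      dt = 0.01
      q = 1.0                     # flushing rate, pore volumes per unit time
      k = 0.5                     # mobile-immobile exchange coefficient
      cm, cim = 1000.0, 1000.0    # uranium in mobile / immobile water (ug/L)

      history, t = [], 0.0
      for step in range(1200):
          flowing = not (4.0 <= t < 6.0)        # stop-flow event from t = 4 to 6
          exchange = k * (cim - cm)             # slow release from the immobile zone
          cm += dt * ((-q * cm if flowing else 0.0) + exchange)
          cim -= dt * exchange
          history.append(cm)
          t += dt

      print("just before stop-flow:  ", round(history[399], 1), "ug/L")
      print("after stop-flow rebound:", round(history[599], 1), "ug/L")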

  11. Competition between reaction-induced expansion and creep compaction during gypsum formation: Experimental and numerical investigation

    NASA Astrophysics Data System (ADS)

    Skarbek, R. M.; Savage, H. M.; Spiegelman, M. W.; Kelemen, P. B.; Yancopoulos, D.

    2017-12-01

    Deformation and cracking caused by reaction-driven volume increase are important processes in many geological settings; however, the conditions controlling these processes are poorly understood. The interaction of rocks with reactive fluids can change permeability and reactive surface area, leading to a large variety of feedbacks. Gypsum is an ideal material for studying these processes: it forms rapidly at room temperature via bassanite hydration and is commonly used as an analogue for rocks under high-temperature, high-pressure conditions. We conducted uniaxial strain experiments to study the effects of applied axial load on deformation and fluid flow during the formation of gypsum from bassanite. While hydration of bassanite to gypsum involves a solid volume increase, gypsum exhibits significant creep compaction when in contact with water; these two volume-changing processes occur simultaneously during fluid flow through bassanite. We cold-pressed bassanite powder to form cylinders 2.5 cm in height and 1.2 cm in diameter. Samples were compressed with a static axial load of 0.01 to 4 MPa. Water infiltrated the initially unsaturated samples through the bottom face, and the height of the samples was recorded as a measure of the total volume change. We also performed experiments on pure gypsum samples to constrain the amount of creep observed in the bassanite hydration tests. At axial loads < 0.15 MPa, volume increase due to the reaction dominates and samples exhibit monotonic expansion. At loads > 1 MPa, creep in the gypsum dominates and samples exhibit monotonic compaction. At intermediate loads, samples exhibit alternating phases of compaction and expansion due to the interplay of the two volume-changing processes; we observed the change from net compaction to net expansion at an axial load of 0.250 MPa. We explain this behavior with a simple model that predicts the strain evolution but does not take fluid flow into account. We also implement a 1D poro-visco-elastic model of the imbibition process that includes the reaction and gypsum creep. We use the results of these models, together with models of the creep rate in gypsum, to estimate the temperature dependence of the axial load at which the total strain transitions from compaction to expansion. Our results have implications for the depth dependence of reaction-induced volume changes in the Earth.

  12. Solute induced relaxation in glassy polymers: Experimental measurements and nonequilibrium thermodynamic model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minelli, Matteo; Doghieri, Ferruccio

    2014-05-15

    Data on the kinetics of mass uptake from vapor sorption experiments in thin glassy polymer samples are here interpreted in terms of relaxation times for volume dilation. Models from both non-equilibrium thermodynamics and the mechanics of volume relaxation contribute to this result. Different kinds of sorption experiments were considered in order to facilitate direct comparison between the kinetics of solute-induced volume dilation and corresponding data from processes driven by pressure or temperature jumps.

  13. The Volume Field Model about Strong Interaction and Weak Interaction

    NASA Astrophysics Data System (ADS)

    Liu, Rongwu

    2016-03-01

    For a long time researchers have believed that the strong interaction and the weak interaction are realized by exchanging intermediate particles. This article proposes a new mechanism: a volume field is a form of material existence in plane space that undergoes volume-changing motion in the form of non-continuous motion, and volume fields interact strongly or weakly by overlapping one another. Based on these concepts, the article further proposes a "bag model" of the volume field for the atomic nucleus, which includes three sub-models: the complex structure of fundamental bodies (such as quarks), the atom-like structure of hadrons, and the molecule-like structure of the atomic nucleus. The article also proposes a plane space model, formulates a physics model of the volume field in plane space, and presents a model of space-time conversion. The model of space-time conversion suggests that point space-time and plane space-time convert into each other by means of merging and rupture, respectively, and that the essence of space-time conversion is the mutual transformation of matter and energy; the collision of high-energy hadrons, the formation of black holes, and the Big Bang are three kinds of space-time conversion.

  14. Stereolithographic volume evaluation of healing and shaping after rhinoplasty operations.

    PubMed

    Tatlidede, Soner; Turgut, Gürsel; Gönen, Emre; Kayali, Mahmut Ulvi; Baş, Lütfü

    2009-07-01

    Nasal edema and volume changes are unavoidable during the healing period after rhinoplasty. Various applications have been reported for preventing early edema; however, to date the literature shows no study focused on the course of nasal edema and volume changes. We aimed to study the nasal volume changes during the first year of the postoperative healing period and to form a recovery and volume change diagram from the obtained data. We prepared standard frames and nasal molds of 7 rhinoplasty patients at regular time intervals (preoperatively and at the postoperative 1st, 2nd, 4th, 8th, 12th, 24th, and 52nd weeks). Plaster nasal models were created from these molds, and the volumes of the models were measured by computed tomographic scanning and three-dimensional image processing programs. According to our results, the nasal edema reaches its maximum level at the postoperative fourth week and then rapidly decreases to its minimum level at the eighth week. In contrast with the general opinion, the nasal volume then begins to increase smoothly, reaching a level minimally below the preoperative value by the end of the first year.

  15. Formation of Cadmium-Sulfide Nanowhiskers via Vacuum Evaporation and Condensation in a Quasi-Closed Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belyaev, A. P., E-mail: Alexei.Belyaev@pharminnotech.com; Antipov, V. V.; Rubets, V. P.

    Structural and technological studies of processes in which cadmium-sulfide nanowhiskers are synthesized in a quasi-closed volume by the method of vacuum evaporation and condensation are reported. It is demonstrated that the processes are in agreement with the classical vapor–liquid–crystal model. Micrographs of the objects in different formation stages are presented.

  16. A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.

    PubMed

    Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing

    2018-04-10

    In order to improve the accuracy and reliability of micropipetting, a method for micropipette detection and calibration was proposed, combining dynamic pressure monitoring during the pipetting process with quantitative image-based identification of the pipetted volume. First, a normalized pressure model for the pipetting process was established from the kinematic model of the pipetting operation, and the pressure model was corrected experimentally. The pressure and its first derivative are monitored in real time, a segmented double-threshold method is used as the pipetting fault evaluation criterion, and the pressure sensor data are processed by Kalman filtering, which improves the accuracy of fault diagnosis. When a fault occurs, a camera captures an image of the pipette tip, the boundary of the liquid region is extracted by the background contrast method, and the liquid volume in the tip is obtained from the geometric characteristics of the tip. The deviation is then fed back to the automatic pipetting module and corrected. Titration tests show that combining the segmented pipetting kinematic model with double-threshold pressure monitoring enables effective real-time detection and classification of pipetting faults, and that closed-loop adjustment of the pipetted volume effectively improves the accuracy and reliability of the pipetting system.
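
    A hedged sketch of the double-threshold check: smooth the pressure trace with a scalar Kalman filter, then flag a fault when the smoothed pressure or its first difference leaves its allowed band. The bands, filter noise settings, and example trace are hypothetical, not values from the paper.

      def kalman_1d(measurements, q=0.01, r=0.01):
          """Scalar Kalman filter smoothing a noisy pressure signal."""
          x, p, out = measurements[0], 1.0, []
          for z in measurements:
              p += q                       # predict
              k = p / (p + r)              # Kalman gain
              x += k * (z - x)             # update with measurement
              p *= (1.0 - k)
              out.append(x)
          return out

      def detect_fault(pressure, band_p=(-5.0, 0.5), band_dp=(-1.0, 1.0)):
          """Return the sample index where a fault is declared, else None."""
          smooth = kalman_1d(pressure)
          for i in range(1, len(smooth)):
              dp = smooth[i] - smooth[i - 1]        # first difference
              if not (band_p[0] <= smooth[i] <= band_p[1]) or \
                 not (band_dp[0] <= dp <= band_dp[1]):
                  return i
          return None

      # Normal aspiration descent, then an abrupt recovery (e.g. air ingestion).
      trace = [0.0, -1.2, -2.1, -2.4, -2.5, -2.5, -0.3, -0.2]
      print("fault flagged at sample:", detect_fault(trace))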

  17. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

    A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. Calibrating the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the model's structure reveals two primary mis-specifications. Correcting the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
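
    For reference, the "simplified gravity model" structure the evaluation points to looks like this: trips from origin i to destination j proportional to productions, attractions, and a decaying function of travel cost, normalized per origin. The zone data, costs, and friction exponent are invented for illustration, not the Central Wisconsin calibration.

      productions = [500.0, 300.0]                 # trips produced by each zone
      attractions = [400.0, 400.0]                 # relative attractiveness
      cost = [[2.0, 5.0],                          # travel time between zones (min)
              [5.0, 3.0]]

      def friction(c, gamma=2.0):
          return c ** -gamma                       # hypothetical decay exponent

      trips = []
      for i, p in enumerate(productions):
          weights = [attractions[j] * friction(cost[i][j]) for j in range(len(attractions))]
          total = sum(weights)
          trips.append([p * w / total for w in weights])

      for row in trips:
          print([round(t, 1) for t in row])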

  18. Simulating double-peak hydrographs from single storms over mixed-use watersheds

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2015-01-01

    Two-peak hydrographs after a single rain event are observed in watersheds and storms with distinct volumes contributing as fast and slow runoff. The authors developed a hydrograph model able to quantify these separate runoff volumes to help in estimation of runoff processes and residence times used by watershed managers. The model uses parallel application of two...
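
    A hedged sketch of the parallel idea described in this record: route fixed fractions of one storm's volume through a fast and a slow linear reservoir and superpose the outflows; with well-separated residence times the summed hydrograph shows two peaks. The volumes, rate constants, and slow-path delay are invented, not the authors' formulation.

      import numpy as np

      dt, n = 0.5, 200                             # hours, number of steps
      rain = np.zeros(n); rain[4:8] = 10.0         # single storm pulse (mm/h)

      def linear_reservoir(inflow, k, delay_steps=0):
          """Linear storage-outflow routing: q = s / k, with an optional lag."""
          s, out = 0.0, np.zeros(n)
          shifted = np.roll(inflow, delay_steps); shifted[:delay_steps] = 0.0
          for i in range(n):
              s += shifted[i] * dt
              q = s / k
              s -= q * dt
              out[i] = q
          return out

      q_fast = linear_reservoir(0.6 * rain, k=2.0)                   # 60% of volume, flashy
      q_slow = linear_reservoir(0.4 * rain, k=20.0, delay_steps=40)  # 40%, delayed and damped
      total = q_fast + q_slow
      peaks = [i for i in range(1, n - 1) if total[i] > total[i - 1] and total[i] > total[i + 1]]
      print("peak times (h):", [round(i * dt, 1) for i in peaks])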

  19. Including hydrological self-regulating processes in peatland models: Effects on peatmoss drought projections.

    PubMed

    Nijp, Jelmer J; Metselaar, Klaas; Limpens, Juul; Teutschbein, Claudia; Peichl, Matthias; Nilsson, Mats B; Berendse, Frank; van der Zee, Sjoerd E A T M

    2017-02-15

    The water content of the topsoil is one of the key factors controlling biogeochemical processes, greenhouse gas emissions and biosphere - atmosphere interactions in many ecosystems, particularly in northern peatlands. In these wetland ecosystems, the water content of the photosynthetic active peatmoss layer is crucial for ecosystem functioning and carbon sequestration, and is sensitive to future shifts in rainfall and drought characteristics. Current peatland models differ in the degree in which hydrological feedbacks are included, but how this affects peatmoss drought projections is unknown. The aim of this paper was to systematically test whether the level of hydrological detail in models could bias projections of water content and drought stress for peatmoss in northern peatlands using downscaled projections for rainfall and potential evapotranspiration in the current (1991-2020) and future climate (2061-2090). We considered four model variants that either include or exclude moss (rain)water storage and peat volume change, as these are two central processes in the hydrological self-regulation of peatmoss carpets. Model performance was validated using field data of a peatland in northern Sweden. Including moss water storage as well as peat volume change resulted in a significant improvement of model performance, despite the extra parameters added. The best performance was achieved if both processes were included. Including moss water storage and peat volume change consistently reduced projected peatmoss drought frequency with >50%, relative to the model excluding both processes. Projected peatmoss drought frequency in the growing season was 17% smaller under future climate than current climate, but was unaffected by including the hydrological self-regulating processes. Our results suggest that ignoring these two fine-scale processes important in hydrological self-regulation of northern peatlands will have large consequences for projected climate change impact on ecosystem processes related to topsoil water content, such as greenhouse gas emissions. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Modeling the Hydrologic Processes of a Permeable Pavement System

    EPA Science Inventory

    A permeable pavement system can capture stormwater to reduce runoff volume and flow rate, improve onsite groundwater recharge, and enhance pollutant controls within the site. A new unit process model for evaluating the hydrologic performance of a permeable pavement system has be...

  1. CrossTalk. The Journal of Defense Software Engineering. Volume 25, Number 3

    DTIC Science & Technology

    2012-06-01

    (OMG) standard Business Process Modeling and Notation (BPMN) [6] graphical notation. I will address each of these: identify and document steps...to a value stream map using BPMN and textual process narratives. The resulting process narratives or process metadata includes key information...objectives. Once the processes are identified we can graphically document them, capturing the process using BPMN (see Figure 1). The BPMN models

  2. The volume-mortality relation for radical cystectomy in England: retrospective analysis of hospital episode statistics

    PubMed Central

    Bottle, Alex; Darzi, Ara W; Athanasiou, Thanos; Vale, Justin A

    2010-01-01

    Objectives To investigate the relation between volume and mortality after adjustment for case mix for radical cystectomy in the English healthcare setting using improved statistical methodology, taking into account institutional and surgeon volume effects and institutional structural and process of care factors. Design Retrospective analysis of hospital episode statistics using multilevel modelling. Setting English hospitals carrying out radical cystectomy in the seven financial years 2000/1 to 2006/7. Participants Patients with a primary diagnosis of cancer undergoing an inpatient elective cystectomy. Main outcome measure Mortality within 30 days of cystectomy. Results Compared with low volume institutions, medium volume ones had significantly higher odds of in-hospital and total mortality: odds ratio 1.72 (95% confidence interval 1.00 to 2.98, P=0.05) and 1.82 (1.08 to 3.06, P=0.02). This was only seen in the final model, which included adjustment for structural and process of care factors. The surgeon volume-mortality relation showed weak evidence of reduced odds of in-hospital mortality (by 35%) for the high volume surgeons, although this did not reach statistical significance at the 5% level. Conclusions The relation between case volume and mortality after radical cystectomy for bladder cancer became evident only after adjustment for structural and process of care factors, including staffing levels of nurses and junior doctors, in addition to case mix. At least for this relatively uncommon procedure, adjusting for these confounders when examining the volume-outcome relation is critical before considering centralisation of care in a few specialist institutions. Outcomes other than mortality, such as functional morbidity and disease recurrence, may ultimately influence decisions about centralising care. PMID:20305302

  3. A new model of reaction-driven cracking: fluid volume consumption and tensile failure during serpentinization

    NASA Astrophysics Data System (ADS)

    Eichenbaum-Pikser, J. M.; Spiegelman, M. W.; Kelemen, P. B.; Wilson, C. R.

    2013-12-01

    Reactive fluid flow plays an important role in a wide range of geodynamic processes, such as melt migration, formation of hydrous minerals on fault surfaces, and chemical weathering. These processes are governed by the complex coupling between fluid transport, reaction, and solid deformation. Reaction-driven cracking is a potentially critical feedback mechanism, by which volume change associated with chemical reaction drives fracture in the surrounding rock. It has been proposed to play a role in both serpentinization and carbonation of peridotite, motivating consideration of its application to mineral carbon sequestration. Previous studies of reactive cracking have focused on the increase in solid volume, and as such, have considered failure in compression. However, if the consumption of fluid is considered in the overall volume budget, the reaction can be net volume reducing, potentially leading to failure in tension. To explore these problems, we have formulated and solved a 2-D model of coupled porous flow, reaction kinetics, and elastic deformation using the finite element model assembler TerraFERMA (Wilson et al, G3 2013 submitted). The model is applied to the serpentinization of peridotite, which can be reasonably approximated as the transfer of a single reactive component (H2O) between fluid and solid phases, making it a simple test case to explore the process. The behavior of the system is controlled by the competition between the rate of volume consumption by the reaction, and the rate of volume replacement by fluid transport, as characterized by a nondimensional parameter χ, which depends on permeability, reaction rate, and the bulk modulus of the solid. Large values of χ correspond to fast fluid transport relative to reaction rate, resulting in a low stress, volume replacing regime. At smaller values of χ, fluid transport cannot keep up with the reaction, resulting in pore fluid under-pressure and tensile solid stresses. For the range of χ relevant to the serpentinization of peridotite, these stresses can reach hundreds of MPa, exceeding the tensile strength of peridotite.

  4. Quantifying Standing Dead Tree Volume and Structural Loss with Voxelized Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Popescu, S. C.; Putman, E.

    2017-12-01

    Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The overall objective of this study was to provide automated aboveground volume estimates of SDTs and automated procedures to detect, quantify, and characterize structural losses over time with terrestrial lidar data. The specific objectives of this study were: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned in dense forests; 2) develop an automated change detection methodology to accurately detect and quantify SDT structural loss between subsequent terrestrial lidar observations; and 3) characterize the structural loss rates of pine and oak SDTs in southeastern Texas. A voxel-based volume estimation algorithm, "TreeVolX", was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting the slice into distinct branch or stem sections then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models (RTMs). TreeVolX estimated large and small branch volume with an RMSE of 7.3% and 13.8%, respectively. A voxel-based change detection methodology was developed to accurately detect and quantify structural losses and incorporated several methods to mitigate the challenges presented by shifting tree and branch positions as SDT decay progresses. The volume and structural loss of 29 SDTs, composed of Pinus taeda and Quercus stellata, were successfully estimated using multitemporal terrestrial lidar observations over elapsed times ranging from 71 - 753 days. Pine and oak structural loss rates were characterized by estimating the amount of volumetric loss occurring in 20 equal-interval height bins of each SDT. Results showed that large pine snags exhibited more rapid structural loss in comparison to medium-sized oak snags in this study.
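
    A toy version of the voxel-based volume step, using a synthetic cylindrical "stem" point cloud: bin the points into voxels, then fill each horizontal slice row by row between the first and last occupied voxel. The row-wise filling is a crude stand-in for TreeVolX's adaptive contour interpolation, and all sizes are arbitrary.

      import numpy as np

      voxel = 0.05                                  # voxel edge length, m
      theta = np.random.uniform(0, 2 * np.pi, 20000)
      z = np.random.uniform(0, 2.0, 20000)
      pts = np.column_stack([0.3 * np.cos(theta), 0.3 * np.sin(theta), z])  # shell of a 0.3 m stem

      idx = np.floor((pts - pts.min(axis=0)) / voxel).astype(int)
      nx, ny, nz = idx.max(axis=0) + 1
      grid = np.zeros((nx, ny, nz), dtype=bool)
      grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True

      filled = 0
      for k in range(nz):                           # process one horizontal slice at a time
          for row in grid[:, :, k]:
              hits = np.flatnonzero(row)
              if hits.size:                         # fill between first and last occupied voxel
                  filled += hits[-1] - hits[0] + 1

      print("voxel volume estimate:", round(filled * voxel ** 3, 3), "m^3")
      print("true cylinder volume :", round(np.pi * 0.3 ** 2 * 2.0, 3), "m^3")

    The voxel estimate overshoots slightly because boundary voxels are counted whole; shrinking the voxel size trades that bias against memory and noise sensitivity.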

  5. A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change.

    PubMed

    Ashraf, M Irfan; Meng, Fan-Rui; Bourque, Charles P-A; MacLean, David A

    2015-01-01

Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth, respectively. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit and values lower than zero mean that the predictions are no better than the average of the observations. The overall mean prediction error (BIAS) of the basal area and volume growth predictions was nominal (BA: -0.0177 cm² 5-year⁻¹; volume: 0.0008 m³ 5-year⁻¹). Model variability, described by the root mean squared error (RMSE), was 40.53 cm² 5-year⁻¹ for basal area prediction and 0.0393 m³ 5-year⁻¹ for volume prediction. The new modelling approach has the potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate the information required for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling.
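
    The validation statistics quoted above are straightforward to compute; the following is a minimal sketch (the Nash-Sutcliffe form of model efficiency is an assumption consistent with the interpretation given in the abstract, and the example numbers are invented):

      # Sketch of the reported validation statistics: model efficiency (EF),
      # BIAS as mean prediction error, and RMSE.
      import numpy as np

      def model_efficiency(obs, pred):
          obs, pred = np.asarray(obs), np.asarray(pred)
          return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def bias(obs, pred):
          return float(np.mean(np.asarray(pred) - np.asarray(obs)))

      def rmse(obs, pred):
          return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

      # EF = 1 is an ideal fit; EF < 0 means predictions are worse than simply
      # using the observation mean, matching the abstract's interpretation.
      obs  = [100.0, 150.0, 200.0, 250.0]   # invented BA growth, cm^2 / 5 yr
      pred = [ 95.0, 160.0, 190.0, 260.0]   # invented model predictions
      print(model_efficiency(obs, pred), bias(obs, pred), rmse(obs, pred))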

  6. A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change

    PubMed Central

    Ashraf, M. Irfan; Meng, Fan-Rui; Bourque, Charles P.-A.; MacLean, David A.

    2015-01-01

Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth, respectively. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit and values lower than zero mean that the predictions are no better than the average of the observations. The overall mean prediction error (BIAS) of the basal area and volume growth predictions was nominal (BA: -0.0177 cm² 5-year⁻¹; volume: 0.0008 m³ 5-year⁻¹). Model variability, described by the root mean squared error (RMSE), was 40.53 cm² 5-year⁻¹ for basal area prediction and 0.0393 m³ 5-year⁻¹ for volume prediction. The new modelling approach has the potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate the information required for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling. PMID:26173081

  7. A negative association between brainstem pontine grey-matter volume, well-being and resilience in healthy twins.

    PubMed

    Gatt, Justine M; Burton, Karen L O; Routledge, Kylie M; Grasby, Katrina L; Korgaonkar, Mayuresh S; Grieve, Stuart M; Schofield, Peter R; Harris, Anthony W F; Clark, C Richard; Williams, Leanne M

    2018-06-20

    Associations between well-being, resilience to trauma and the volume of grey-matter regions involved in affective processing (e.g., threat/reward circuits) are largely unexplored, as are the roles of shared genetic and environmental factors derived from multivariate twin modelling. This study presents, to our knowledge, the first exploration of well-being and volumes of grey-matter regions involved in affective processing using a region-of-interest, voxel-based approach in 263 healthy adult twins (60% monozygotic pairs, 61% females, mean age 39.69 yr). To examine patterns for resilience (i.e., positive adaptation following adversity), we evaluated associations between the same brain regions and well-being in a trauma-exposed subgroup. We found a correlated effect between increased well-being and reduced grey-matter volume of the pontine nuclei. This association was strongest for individuals with higher resilience to trauma. Multivariate twin modelling suggested that the common variance between the pons volume and well-being scores was due to environmental factors. We used a cross-sectional sample; results need to be replicated longitudinally and in a larger sample. Associations with altered grey matter of the pontine nuclei suggest that basic sensory processes, such as arousal, startle, memory consolidation and/or emotional conditioning, may have a role in well-being and resilience.

  8. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    PubMed

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 cm³) nor model-based (26.87 ± 2.99 cm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 cm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although the atlas-based and model-based methods appear to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations required at least manual adjustments to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  9. Organization Domain Modeling. Volume 1. Conceptual Foundations, Process and Workproduct Description

    DTIC Science & Technology

    1993-07-31

This report addresses domain analysis (DA) and modeling, including a structured set of workproducts, a tailorable process model, and a set of modeling techniques and guidelines. It cites J.A. Hess, W.E. Novak, and A.S. Peterson, Feature-Oriented Domain Analysis (FODA) Feasibility Study, Technical Report CMU/SEI-90-TR-21, Software Engineering Institute.

  10. Kinetics of the mechanochemical synthesis of alkaline-earth metal amides

    NASA Astrophysics Data System (ADS)

    Garroni, Sebastiano; Takacs, Laszlo; Leng, Haiyan; Delogu, Francesco

    2014-07-01

A phenomenological framework is developed to model the kinetics of the formation of alkaline-earth metal amides by the ball-milling-induced reaction of their hydrides with gaseous ammonia. It is shown that the exponential character of the kinetic curves is modulated by the increase of the total volume of the powder inside the reactor, due to the substantially larger molar volume of the products compared with the reactants. It is claimed that the volume of powder effectively processed during each collision connects the transformation rate to the physical and chemical processes underlying the mechanochemical transformations.
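
    The claimed modulation of exponential kinetics by a growing powder volume can be sketched with a simple per-collision recursion. The update rule and all parameter values below are illustrative assumptions, not the paper's fitted model:

      # Illustrative sketch (assumed form): the transformed fraction grows
      # quasi-exponentially with collision count, but each collision processes
      # a fixed volume v out of a total powder volume V(alpha) that expands as
      # the larger-molar-volume product forms.
      def transformed_fraction(n_collisions, v=1e-3, V0=1.0, expansion=0.5):
          alpha, history = 0.0, []
          for _ in range(n_collisions):
              V = V0 * (1.0 + expansion * alpha)   # powder volume grows with conversion
              alpha += (v / V) * (1.0 - alpha)     # per-collision increment
              history.append(alpha)
          return history

      curve = transformed_fraction(5000)
      print(f"alpha after 1000 collisions: {curve[999]:.3f}")
      print(f"alpha after 5000 collisions: {curve[-1]:.3f}")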

  11. Development of a Process Signature for Manufacturing Processes with Thermal Loads

    NASA Astrophysics Data System (ADS)

    Frerichs, Friedhelm; Meyer, Heiner; Strunk, Rebecca; Kolkwitz, Benjamin; Epp, Jeremy

    2018-06-01

The newly proposed concept of Process Signatures enables the comparison of seemingly different manufacturing processes via a process-independent approach based on the analysis of the loading condition and the resulting material modification. This contribution compares recently published numerical results for the development of Process Signatures for pure surface and volume heating without phase transformations with experimental data. The numerical approach applies moving-heat-source theory in combination with energetic quantities. The external thermal loading of both processes was characterized by the resulting temperature development, which correlates with a change in the residual stress state. The numerical investigations show that surface and volume heating are interchangeable for certain parameter regimes with regard to changes in the residual stress state. Temperature gradients and thermal diffusion are mainly responsible for the considered modifications. The surface- and volume-heating models correspond to shallow-cut grinding and induction heating, respectively. The comparison of numerical and experimental data reveals similarities, but also some systematic deviations of the residual stresses at the surface. The evaluation and final discussion support the assertion that very fast stress-relaxation processes occur within the subsurface region. A consequence would be that these stress-relaxation processes, which are not yet included in the numerical models, must be included in the Process Signatures for purely thermal impacts.

  12. Gas permeability of ice-templated, unidirectional porous ceramics

    NASA Astrophysics Data System (ADS)

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J.

    2016-01-01

We investigate the gas flow behavior of unidirectional porous ceramics processed by ice-templating. The pore volume ranged between 54% and 72% and the pore size between 2.9 μm and 19.1 μm. The maximum permeability was measured in samples with the highest total pore volume (72%) and largest pore size (19.1 μm). However, we demonstrate that it is possible to achieve a similar permeability at 54% pore volume by modification of the pore shape. These results were compared with those reported and measured for isotropic porous materials processed by conventional techniques. In unidirectional porous materials, tortuosity (τ) is mainly controlled by pore size, unlike in isotropic porous structures, where τ is linked to pore volume. Furthermore, we assessed the applicability of the Ergun and capillary models in the prediction of permeability and found that the capillary model accurately describes the gas flow behavior of unidirectional porous materials. Finally, we combined the permeability data obtained here with strength data for these materials to establish links between strength and permeability of ice-templated materials.
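
    A standard capillary-bundle form of the capillary model illustrates the trade-off reported here: permeability scales with pore volume and the square of pore diameter, so lower pore volume can be compensated by pore geometry. The constant 32 applies to cylindrical capillaries aligned with the flow; the numbers below are illustrative, not the paper's fitted values:

      # Capillary-bundle permeability sketch: k = porosity * d^2 / (32 * tau).
      def capillary_permeability(porosity, pore_diameter_m, tortuosity=1.0):
          return porosity * pore_diameter_m**2 / (32.0 * tortuosity)

      # High pore volume with large pores vs. lower pore volume with slightly
      # larger/straighter pores yielding a comparable permeability.
      k_a = capillary_permeability(0.72, 19.1e-6)   # ~8.2e-12 m^2
      k_b = capillary_permeability(0.54, 22.1e-6)   # comparable k at 54% porosity
      print(f"k_a = {k_a:.2e} m^2, k_b = {k_b:.2e} m^2")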

  13. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes

    ERIC Educational Resources Information Center

    Cartier, Stephen F.

    2011-01-01

    A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…

  14. Delay functions in trip assignment for transport planning process

    NASA Astrophysics Data System (ADS)

    Leong, Lee Vien

    2017-10-01

In the transportation planning process, volume-delay and turn-penalty functions are needed in traffic assignment to determine travel times on road network links. The volume-delay function describes the speed-flow relationship, while the turn-penalty function describes the delay associated with making a turn at an intersection. The volume-delay function used in this study is the revised Bureau of Public Roads (BPR) function with constant parameters α = 0.8298 and β = 3.361, while the turn-penalty functions for signalized intersections were developed based on uniform, random, and overflow delay models. Parameters such as green time, cycle time, and saturation flow were used in the development of the turn-penalty functions. In order to assess the accuracy of the delay functions, the road network in the areas of Nibong Tebal, Penang and Parit Buntar, Perak was developed and modelled using transportation demand forecasting software. To calibrate the models, phase times and traffic volumes at fourteen signalised intersections within the study area were collected during morning and evening peak hours. The assigned volumes predicted using the revised BPR function and the developed turn-penalty functions show close agreement with the actual recorded traffic volumes, with accuracies ranging from 80.08% to 93.04% for the morning peak model and from 75.59% to 95.33% for the evening peak model. For the yield left-turn lanes, the lowest accuracies obtained for the morning and evening peak models were 60.94% and 69.74%, respectively, while the highest accuracy obtained for both models was 100%. It can therefore be concluded that developing and using delay functions based on local road conditions is important, as localised delay functions can produce better estimates of link travel times and hence better planning for future scenarios.
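
    The BPR volume-delay function is a standard closed form, shown below as a sketch using the constants reported above (α = 0.8298, β = 3.361); t0 is the free-flow travel time and v/c the volume-to-capacity ratio. The example values are illustrative:

      # Revised BPR volume-delay function with the constants from this study.
      def bpr_travel_time(t0_min, volume, capacity, alpha=0.8298, beta=3.361):
          return t0_min * (1.0 + alpha * (volume / capacity) ** beta)

      for vc in (0.5, 0.8, 1.0, 1.2):
          t = bpr_travel_time(10.0, vc, 1.0)   # 10-minute free-flow link
          print(f"v/c = {vc:.1f}: travel time = {t:.2f} min")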

  15. Initial retrieval sequence and blending strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pemwell, D.L.; Grenard, C.E.

    1996-09-01

This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment, and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk, and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.

  16. Estimating the volume of supra-glacial melt lakes across Greenland: A study of uncertainties derived from multi-platform water-reflectance models

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.

    2012-12-01

Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite sensors such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS, and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lacked any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected ASTER and MODIS data as input to the water-reflectance model. Three physical parameters are needed: bed albedo, the water attenuation coefficient, and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
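
    A sketch of the Bouguer-Lambert-Beer depth retrieval (after Sneed and Hamilton, 2007) combined with the Monte Carlo idea follows; the parameter means and spreads are invented for illustration, not the study's calibrated values:

      # Depth from reflectance, z = [ln(Ad - Rinf) - ln(R - Rinf)] / g, with
      # Monte Carlo propagation of parameter uncertainty.
      import numpy as np

      rng = np.random.default_rng(0)

      def lake_depth(reflectance, bed_albedo, deep_water_refl, attenuation_g):
          return (np.log(bed_albedo - deep_water_refl)
                  - np.log(reflectance - deep_water_refl)) / attenuation_g

      n = 10_000
      Ad   = rng.normal(0.55, 0.03, n)   # bed albedo (assumed)
      Rinf = rng.normal(0.04, 0.01, n)   # optically deep water reflectance (assumed)
      g    = rng.normal(0.80, 0.10, n)   # attenuation coefficient, 1/m (assumed)
      z = lake_depth(0.25, Ad, Rinf, g)  # pixel reflectance 0.25
      print(f"depth = {np.mean(z):.2f} +/- {np.std(z):.2f} m")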

  17. High-level waste borosilicate glass: A compendium of corrosion characteristics. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunnane, J.C.; Bates, J.K.; Bradley, C.R.

The objective of this document is to summarize scientific information pertinent to evaluating the extent to which high-level waste borosilicate glass corrosion and the associated radionuclide release processes are understood for the range of environmental conditions to which waste glass may be exposed in service. Alteration processes occurring within the bulk of the glass (e.g., devitrification and radiation-induced changes) are discussed insofar as they affect glass corrosion. This document is organized into three volumes. Volumes I and II represent a tiered set of information intended for somewhat different audiences. Volume I is intended to provide an overview of waste glass corrosion, and Volume II is intended to provide additional details on the experimental factors that influence waste glass corrosion. Volume III contains a bibliography of glass corrosion studies, including studies that are not cited in Volumes I and II. Volume I is intended for managers, decision makers, and modelers; the combined set of Volumes I, II, and III is intended for scientists and engineers working in the field of high-level waste.

  18. Meta-control of combustion performance with a data mining approach

    NASA Astrophysics Data System (ADS)

    Song, Zhe

Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermal dynamics have limitations in finding optimal operational regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes and ultimately optimizing combustion performance. However, the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, and obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation is set to solve these two major challenges. The major contribution of this four-year research is a data-driven solution to optimize the combustion process, in which a process model or knowledge is identified from the process data and optimization is then executed by evolutionary algorithms to search for optimal operating regions.

  19. Emergent neutrality drives phytoplankton species coexistence

    PubMed Central

    Segura, Angel M.; Calliari, Danilo; Kruk, Carla; Conde, Daniel; Bonilla, Sylvia; Fort, Hugo

    2011-01-01

    The mechanisms that drive species coexistence and community dynamics have long puzzled ecologists. Here, we explain species coexistence, size structure and diversity patterns in a phytoplankton community using a combination of four fundamental factors: organism traits, size-based constraints, hydrology and species competition. Using a ‘microscopic’ Lotka–Volterra competition (MLVC) model (i.e. with explicit recipes to compute its parameters), we provide a mechanistic explanation of species coexistence along a niche axis (i.e. organismic volume). We based our model on empirically measured quantities, minimal ecological assumptions and stochastic processes. In nature, we found aggregated patterns of species biovolume (i.e. clumps) along the volume axis and a peak in species richness. Both patterns were reproduced by the MLVC model. Observed clumps corresponded to niche zones (volumes) where species fitness was highest, or where fitness was equal among competing species. The latter implies the action of equalizing processes, which would suggest emergent neutrality as a plausible mechanism to explain community patterns. PMID:21177680
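
    The clumping result can be reproduced with a generic Lotka-Volterra competition system along a one-dimensional niche axis. The following is a minimal sketch under assumed parameters (a platykurtic competition kernel standing in for the paper's trait-based "recipes"), not the authors' MLVC code:

      # Lotka-Volterra competition along a niche (volume) axis:
      # dN_i/dt = N_i (r_i - sum_j a_ij N_j), integrated with explicit Euler.
      import numpy as np

      n_species = 40
      x = np.linspace(0, 1, n_species)      # niche position ~ log organism volume
      r = np.ones(n_species)                # intrinsic growth rates (assumed equal)
      sigma = 0.1
      # Quartic (flatter-than-Gaussian) kernel; such kernels are known to
      # favor self-organized clumps of coexisting species.
      A = np.exp(-(np.abs(x[:, None] - x[None, :]) / sigma) ** 4)

      N = np.full(n_species, 0.5)
      dt = 0.1
      for _ in range(20_000):
          N += dt * N * (r - A @ N)
          N = np.clip(N, 0.0, None)

      print("species with non-negligible abundance:", np.round(x[N > 1e-3], 2))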

  20. Classroom Ideas for Encouraging Thinking and Feeling: A Total Creativity Program for Individualizing and Humanizing the Learning Process. Volume Five.

    ERIC Educational Resources Information Center

    Williams, Frank E.

    This volume, the final one in the series, presents about 400 ideas which teachers can use to teach creative thinking. The ideas are classified according to teacher behavior (strategies or modes of teaching) and by types of pupil behavior, as described in the rationale for the cognitive-affective instructional (CAI) model presented in volume 2. The…

  1. Tumor-volume simulation during radiotherapy for head-and-neck cancer using a four-level cell population model.

    PubMed

    Chvetsov, Alexei V; Dong, Lei; Palta, Jantinder R; Amdur, Robert J

    2009-10-01

To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy, which is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of the total cell number, and (2) the tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for the disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on averaged values of the radiobiologic parameters was able to describe eight cases over the entire treatment and four cases partially (50% of the treatment time) with a maximum error of 20%. The largest discrepancies between the model and the clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor-volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, such as concurrent chemotherapy, on tumor volume should be evaluated.
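
    A hedged sketch of this four-compartment bookkeeping is shown below; the survival fractions, hypoxic fraction, and clearance half-life are illustrative assumptions, not the paper's fitted radiobiologic parameters:

      # Four-level cell-population sketch: daily fractions move viable cells
      # into lethally damaged compartments; damaged oxygenated cells are
      # removed by exponential decay; volume is proportional to cell number.
      import numpy as np

      def tumor_volume(days=35, s_oxy=0.5, s_hyp=0.8,
                       clearance_half_life=7.0, hypoxic_frac=0.2, v0=100.0):
          ox_v, ox_d = (1 - hypoxic_frac), 0.0   # oxygenated viable / damaged
          hy_v, hy_d = hypoxic_frac, 0.0         # hypoxic viable / damaged
          lam = np.log(2) / clearance_half_life  # removal rate, 1/day
          volume = []
          for day in range(days):
              ox_d += ox_v * (1 - s_oxy); ox_v *= s_oxy   # daily dose fraction
              hy_d += hy_v * (1 - s_hyp); hy_v *= s_hyp
              ox_d *= np.exp(-lam)                        # disintegration/removal
              volume.append(v0 * (ox_v + ox_d + hy_v + hy_d))
          return volume

      v = tumor_volume()
      print(f"relative volume after 5 weeks: {v[-1] / 100.0:.2f}")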

  2. Optimization of critical quality attributes in continuous twin-screw wet granulation via design space validated with pilot scale experimental data.

    PubMed

    Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu

    2017-06-15

In this study, the influence of key process variables (screw speed, throughput, and liquid-to-solid (L/S) ratio) of continuous twin-screw wet granulation (TSWG) was investigated using a central composite face-centered (CCF) experimental design. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability), and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrate that all the process responses, granule properties, and tablet properties are influenced by changes in screw speed, throughput, and L/S ratio. The TSWG process was optimized to produce granules with a specific volume average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built, based on a volume average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure-probability analysis using Monte Carlo simulations. Validation experiments successfully confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
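
    The design-space construction can be sketched generically: sample the regression predictions with residual noise and count how often the granule constraints (diameter 90-200 μm, yield >75%) are met. The regression coefficients and noise levels below are invented placeholders, not the study's fitted models:

      # Monte Carlo failure-probability sketch for a design-space point.
      import numpy as np

      rng = np.random.default_rng(1)

      def predict_d50(screw_rpm, throughput, ls_ratio):    # placeholder model, um
          return 40 + 0.1 * screw_rpm + 15 * throughput + 400 * ls_ratio

      def predict_yield(screw_rpm, throughput, ls_ratio):  # placeholder model, %
          return 60 + 0.02 * screw_rpm + 5 * throughput + 50 * ls_ratio

      def failure_probability(screw_rpm, throughput, ls_ratio, n=20_000):
          d50 = predict_d50(screw_rpm, throughput, ls_ratio) + rng.normal(0, 15, n)
          yld = predict_yield(screw_rpm, throughput, ls_ratio) + rng.normal(0, 5, n)
          ok = (d50 >= 90) & (d50 <= 200) & (yld > 75)
          return 1.0 - ok.mean()

      print(f"P(failure) at 500 rpm, 2 kg/h, L/S 0.15: "
            f"{failure_probability(500, 2.0, 0.15):.3f}")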

  3. Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.

    PubMed

    Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy

    2015-12-30

While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphics processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors, including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT) regressors. The whole-brain data analysis, with more than 100,000 voxels and more than 250 volumes, is completed in less than 300 ms, much faster than the time required to acquire an fMRI volume. Real-time processing cannot be implemented identically to off-line analysis when time-course information is used, as in slice-timing correction, signal scaling, and GLM analysis. We verified that the reduced slice-timing correction for real-time analysis had output comparable to off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time and is capable of processing these steps on all available data at a given time, without the need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, although the number of samples should be considered in real-time GLM analysis. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    NASA Astrophysics Data System (ADS)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

A finite element model of selective laser melting (SLM) is established that considers volume shrinkage during the powder-to-dense transition of the powder layer. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy for the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power and with increasing laser power at constant scan speed. The simulation results and the experimental result reveal that linear energy density is not always reliable as a design parameter in SLM.
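
    Linear energy density is simply laser power divided by scan speed; the sketch below illustrates why it can be unreliable as a sole design parameter, since different power/speed pairs share the same value. The values are illustrative:

      # Linear energy density E = P / v. Pairs with identical E need not
      # produce the same melt-pool behavior, per the finding above.
      def linear_energy_density(power_w, scan_speed_mm_s):
          return power_w / scan_speed_mm_s      # J/mm

      pairs = [(100, 400), (200, 800), (400, 1600)]   # all give E = 0.25 J/mm
      for p, v in pairs:
          e = linear_energy_density(p, v)
          print(f"P = {p:3d} W, v = {v:4d} mm/s -> E = {e:.2f} J/mm")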

  5. CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 10, October 2008

    DTIC Science & Technology

    2008-10-01

Despite proprietary modeling offerings, there is considerable convergence around Business Process Modeling Notation (BPMN). The research also found strong support across vendors for the Business Process Execution Language standard, though there is also emerging support for direct execution of BPMN through the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also provide the needed monitoring of those processes.

  6. TH-A-BRF-02: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - Modeling Tumor Evolution for Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Lee, CG; Chan, TCY

    2014-06-15

Purpose: To develop mathematical models of tumor geometry changes under radiotherapy that may support future adaptive paradigms. Methods: A total of 29 cervical patients were scanned using MRI, once for planning and weekly thereafter for treatment monitoring. Using the tumor volumes contoured by a radiologist, three mathematical models were investigated based on the assumption of a stochastic process of tumor evolution. The "weekly MRI" model predicts tumor geometry for the following week from the last two consecutive MRI scans, based on the voxel transition probability. The other two models use only the first pair of consecutive MRI scans, and the transition probabilities were estimated via tumor type classified from the entire data set. The classification is based either on measuring the tumor volume (the "weekly volume" model) or on implementing an auxiliary "Markov chain" model. These models were compared to a constant volume approach that represents current clinical practice, using various model parameters; e.g., the threshold probability β converts the probability map into a tumor shape (a larger threshold implies a smaller tumor). Model performance was measured using the volume conformity index (VCI), i.e., the union of the actual target and modeled target volume squared divided by the product of these two volumes. Results: The "weekly MRI" model outperforms the constant volume model by 26% on average, and by 103% for the worst 10% of cases, in terms of VCI under a wide range of β. The "weekly volume" and "Markov chain" models outperform the constant volume model by 20% and 16% on average, respectively. They also perform better than the "weekly MRI" model when β is large. Conclusion: It has been demonstrated that mathematical models can be developed to predict tumor geometry changes for cervical cancer undergoing radiotherapy. The models can potentially support the adaptive radiotherapy paradigm by reducing normal tissue dose. This research was supported in part by the Ontario Consortium for Adaptive Interventions in Radiation Oncology (OCAIRO) funded by the Ontario Research Fund (ORF) and the MITACS Accelerate Internship Program.

  7. Volume, size, professionals' specialization and nutrition management of NICUs and their association with treatment quality in VLBW infants.

    PubMed

    Miedaner, Felix; Langhammer, Kristina; Enke, Christian; Göpel, Wolfgang; Kribs, Angela; Nitzsche, Anika; Riedel, Rainer; Woopen, Christiane; Kuntz, Ludwig; Roth, Bernhard

    2018-04-01

    To assess the association of volume, size, the availability of highly-specialized professionals and nutrition management of NICUs with treatment quality among VLBW infants. A prospective multicenter study of 923 VLBW infants in 66 German NICUs, born between May and October 2013. Using multilevel modeling, we examined the association between the aforementioned organizational characteristics and treatment quality, measured via major morbidities (severe IVH, PVL, BPD, NEC, FIP, ROP, and discharge without severe complications) and medical process measures of VLBW infants. After risk-adjustment and accounting for other NICU characteristics, infants in low-volume NICUs were at higher risk of IVH, ROP and PVL. However, the initial effect of volume on process measures (growth velocity, administration of antenatal steroids) disappeared. Volume can only partially explain differences in the treatment quality of VLBWs. The underlying organizational mechanisms should be considered to improve the quality of care.

  8. 26th JANNAF Airbreathing Propulsion Subcommittee Meeting. Volume 1

    NASA Technical Reports Server (NTRS)

    Fry, Ronald S. (Editor); Gannaway, Mary T. (Editor)

    2002-01-01

This volume, the first of four, is a collection of 28 unclassified/unlimited-distribution papers presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 26th Airbreathing Propulsion Subcommittee (APS) meeting, held jointly with the 38th Combustion Subcommittee (CS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 2nd Modeling and Simulation Subcommittee. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics covered include: scramjet and ramjet R&D program overviews; tactical propulsion; space access; NASA GTX status; PDE technology; actively cooled engine structures; modeling and simulation of complex hydrocarbon fuels and unsteady processes; and component modeling and simulation.

  9. Developing a stochastic traffic volume prediction model for public-private partnership projects

    NASA Astrophysics Data System (ADS)

    Phong, Nguyen Thanh; Likhitruangsilp, Veerasak; Onishi, Masamitsu

    2017-11-01

Transportation projects require an enormous amount of capital investment owing to their tremendous size, complexity, and risk. Because of the limits of public finances, the private sector is invited to participate in transportation project development. The private sector can entirely or partially invest in transportation projects under the Public-Private Partnership (PPP) scheme, which has been an attractive option for several developing countries, including Vietnam. Many factors affect the success of PPP projects, and accurate prediction of traffic volume is considered one of the key success factors for PPP transportation projects. However, only a few research works have investigated how to predict traffic volume over a long period of time. Moreover, conventional traffic volume forecasting methods are usually based on deterministic models that predict a single value of traffic volume but do not consider risk and uncertainty. This knowledge gap makes it difficult for concessionaires to estimate PPP transportation project revenues accurately. The objective of this paper is to develop a probabilistic traffic volume prediction model. First, traffic volumes are estimated following a Geometric Brownian Motion (GBM) process. The Monte Carlo technique is then applied to simulate different scenarios. The results show that this stochastic approach can systematically analyze variations in traffic volume and yield more reliable estimates for PPP projects.
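
    A minimal sketch of the GBM-plus-Monte-Carlo idea follows; the drift and volatility values are assumptions for illustration, not calibrated to any project:

      # GBM traffic sketch: V(t) = V0 * exp((mu - sigma^2/2) t + sigma W_t),
      # simulated by Monte Carlo to obtain a distribution rather than a
      # single deterministic forecast.
      import numpy as np

      rng = np.random.default_rng(42)

      def simulate_gbm(v0, mu, sigma, years, n_paths):
          t = np.arange(1, years + 1)
          dW = rng.normal(0.0, 1.0, size=(n_paths, years))   # annual increments
          W = np.cumsum(dW, axis=1)
          return v0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

      paths = simulate_gbm(v0=20_000, mu=0.04, sigma=0.12, years=25, n_paths=10_000)
      p10, p50, p90 = np.percentile(paths[:, -1], [10, 50, 90])
      print(f"year-25 daily traffic: P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f}")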

  10. Viscoelastic properties, gelation behavior and percolation theory model for the temperature induced forming (TIF) ceramic slurries

    NASA Astrophysics Data System (ADS)

    Yang, Yunpeng

Controlled ceramic processing is required to produce ceramic parts with few strength-limiting defects and to form near-net-shape components economically. Temperature induced forming (TIF) is a novel ceramic forming process that uses colloidal processing to form ceramic green bodies by physical gelation. The dissertation research shows that TIF alumina suspensions (>40 vol%) can be successfully fabricated using 0.4 wt% ammonium citrate powder and <0.1 wt% poly(acrylic acid) (PAA). It is found that increasing the volume fraction of alumina or the molecular weight of the polymer increases the shear viscosity and shear modulus. Larger-molecular-weight PAA tends to decrease the volume-fraction gelation threshold of the alumina suspensions. The author is the first in this field to utilize continuous percolation theory to interpret the evolution of the storage modulus with temperature for TIF alumina suspensions. A model that relates the storage modulus to temperature and the volume fraction of solids is proposed. Calculated results using this percolation model show that the storage modulus of the suspensions is affected by the volume fraction of solids, temperature, the volume-fraction gelation threshold, and the percolation nature. The parameters in this model have been derived from the experimental data, and the calculated results fit the measured data well. For the PAA-free TIF alumina suspensions, it is found that the ionization reaction of the magnesium citrate, which is induced by the pH or temperature of the suspensions, controls the flocculation of the suspensions. The percolation theory model was successfully applied to this type of suspension. Compared with the PAA-containing TIF suspensions, these suspensions reflect a higher degree of percolation nature, as indicated by a larger percolation exponent. These results show that the percolation model proposed in this dissertation can be used to predict the gelation degree of TIF suspensions. Complex-shape engineering ceramic parts with a relative density of ~65% have been successfully fabricated by direct casting using the TIF alumina suspensions. The sample sintered at 1550 °C for 2 h is translucent and has a uniform grain size.
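
    The percolation scaling invoked here is conventionally written as a power law above the gelation threshold. A hedged sketch of such a form (the temperature factor f(T) and the symbols are assumptions, not the dissertation's exact expression) is:

      G'(\phi, T) = G_0 \left[ \phi\, f(T) - \phi_c \right]^{t}, \qquad \phi\, f(T) > \phi_c

    where φ is the volume fraction of solids, φ_c the volume-fraction gelation (percolation) threshold, t the percolation exponent, and f(T) an empirical factor describing temperature-induced flocculation.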

  11. Project Developmental Continuity Evaluation: Final Report. Volume II: The Process of Program Implementation in PDC.

    ERIC Educational Resources Information Center

    Wacker, Sally; And Others

    The second of two volumes, this document continues the final evaluation report of Project Developmental Continuity (PDC), a Head Start demonstration project initiated in 1974 to develop program models which enhance children's social competence by fostering developmental continuity from preschool through the early elementary grades. In particular,…

  12. Summation of IMS Volume Frequencies.

    ERIC Educational Resources Information Center

    Gordillo, Frank

    A computer program designed to produce summary information on the data processing volume of the Southwest Regional Laboratory's (SWRL) Instructional Management System (IMS) is described. Written in FORTRAN IV for use on an IBM 360 Model 91, the program sorts IMS input data on the basis of run identifier and on the basis of classroom identification…

  13. A Comparison of Two Fat Grafting Methods on Operating Room Efficiency and Costs.

    PubMed

    Gabriel, Allen; Maxwell, G Patrick; Griffin, Leah; Champaneria, Manish C; Parekh, Mousam; Macarios, David

    2017-02-01

    Centrifugation (Cf) is a common method of fat processing but may be time consuming, especially when processing large volumes. To determine the effects on fat grafting time, volume efficiency, reoperations, and complication rates of Cf vs an autologous fat processing system (Rv) that incorporates fat harvesting and processing in a single unit. We performed a retrospective cohort study of consecutive patients who underwent autologous fat grafting during reconstructive breast surgery with Rv or Cf. Endpoints measured were volume of fat harvested (lipoaspirate) and volume injected after processing, time to complete processing, reoperations, and complications. A budget impact model was used to estimate cost of Rv vs Cf. Ninety-eight patients underwent fat grafting with Rv, and 96 patients received Cf. Mean volumes of lipoaspirate (506.0 vs 126.1 mL) and fat injected (177.3 vs 79.2 mL) were significantly higher (P < .0001) in the Rv vs Cf group, respectively. Mean time to complete fat grafting was significantly shorter in the Rv vs Cf group (34.6 vs 90.1 minutes, respectively; P < .0001). Proportions of patients with nodule and cyst formation and/or who received reoperations were significantly less in the Rv vs Cf group. Based on these outcomes and an assumed per minute operating room cost, an average per patient cost savings of $2,870.08 was estimated with Rv vs Cf. Compared to Cf, the Rv fat processing system allowed for a larger volume of fat to be processed for injection and decreased operative time in these patients, potentially translating to cost savings. LEVEL OF EVIDENCE 3. © 2016 The American Society for Aesthetic Plastic Surgery, Inc.

  14. A MATHEMATICAL MODEL OF ELECTROSTATIC PRECIPITATION. (REVISION 1): VOLUME I. MODELING AND PROGRAMMING

    EPA Science Inventory

    The report briefly describes the fundamental mechanisms and limiting factors involved in the electrostatic precipitation process. It discusses theories and procedures used in the computer model to describe the physical mechanisms, and generally describes the major operations perf...

  15. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  16. COMO: a numerical model for predicting furnace performance in axisymmetric geometries. Volume 1. Technical summary. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiveland, W.A.; Oberjohn, W.J.; Cornelius, D.K.

    1985-12-01

This report summarizes the work conducted during a 30-month contract with the United States Department of Energy (DOE) Pittsburgh Energy Technology Center (PETC). The general objective is to develop and verify a computer code capable of modeling the major aspects of pulverized coal combustion. Achieving this objective will lead to design methods applicable to industrial and utility furnaces. The combustion model (COMO) is based mainly on an existing Babcock and Wilcox (B and W) computer program. The model consists of a number of relatively independent modules that represent the major processes involved in pulverized coal combustion: flow, heterogeneous and homogeneous chemical reaction, and heat transfer. As models are improved or as new ones are developed, this modular structure allows portions of the COMO model to be updated with minimal impact on the remainder of the program. The report consists of two volumes. This volume (Volume 1) contains a technical summary of the COMO model, results of predictions for gas phase combustion and pulverized coal combustion, and a detailed description of the COMO model. Volume 2 is the Users Guide for COMO and contains detailed instructions for preparing the input data and a description of the program output. Several example cases have been included to aid the user in using the computer program for pulverized coal applications. 66 refs., 41 figs., 21 tabs.

  17. Glass Property Data and Models for Estimating High-Level Waste Glass Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang

    2009-10-05

This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of the data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published and should be considered for interim use in calculating properties of Hanford waste glasses.

  18. Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 2: The design process

    NASA Technical Reports Server (NTRS)

    Gillette, W. B.; Turner, M. J.; Southall, J. W.; Whitener, P. C.; Kowalik, J. S.

    1973-01-01

    The extent to which IPAD is to support the design process is identified. Case studies of representative aerospace products were developed as models to characterize the design process and to provide design requirements for the IPAD computing system.

  19. ISRU System Model Tool: From Excavation to Oxygen Production

    NASA Technical Reports Server (NTRS)

    Santiago-Maldonado, Edgardo; Linne, Diane L.

    2007-01-01

    In the late 80's, conceptual designs for an in situ oxygen production plant were documented in a study by Eagle Engineering [1]. In the "Summary of Findings" of this study, it is clearly pointed out that: "reported process mass and power estimates lack a consistent basis to allow comparison." The study goes on to say: "A study to produce a set of process mass, power, and volume requirements on a consistent basis is recommended." Today, approximately twenty years later, as humans plan to return to the moon and venture beyond, the need for flexible up-to-date models of the oxygen extraction production process has become even more clear. Multiple processes for the production of oxygen from lunar regolith are being investigated by NASA, academia, and industry. Three processes that have shown technical merit are molten regolith electrolysis, hydrogen reduction, and carbothermal reduction. These processes have been selected by NASA as the basis for the development of the ISRU System Model Tool (ISMT). In working to develop up-to-date system models for these processes NASA hopes to accomplish the following: (1) help in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the excavation and oxygen production processes, and (4) provide estimates on energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters. Also, as confidence and high fidelity is achieved with each component's model, new techniques and processes can be introduced and analyzed at a fraction of the cost of traditional hardware development and test approaches. A first generation ISRU System Model Tool has been used to provide inputs to the Lunar Architecture Team studies.

  20. Effect of hospital volume on processes of breast cancer care: A National Cancer Data Base study.

    PubMed

    Yen, Tina W F; Pezzin, Liliana E; Li, Jianing; Sparapani, Rodney; Laud, Purushuttom W; Nattinger, Ann B

    2017-05-15

The purpose of this study was to examine variations in delivery of several breast cancer processes of care that are correlated with lower mortality and disease recurrence, and to determine the extent to which hospital volume explains this variation. Women who were diagnosed with stage I-III unilateral breast cancer between 2007 and 2011 were identified within the National Cancer Data Base. Multiple logistic regression models were developed to determine whether hospital volume was independently associated with each of 10 individual process of care measures addressing diagnosis and treatment, and 2 composite measures assessing appropriateness of systemic treatment (chemotherapy and hormonal therapy) and locoregional treatment (margin status and radiation therapy). Among 573,571 women treated at 1755 different hospitals, 38%, 51%, and 10% were treated at high-, medium-, and low-volume hospitals, respectively. On multivariate analysis controlling for patient sociodemographic characteristics, treatment year and geographic location, hospital volume was a significant predictor for cancer diagnosis by initial biopsy (medium volume: odds ratio [OR] = 1.15, 95% confidence interval [CI] = 1.05-1.25; high volume: OR = 1.30, 95% CI = 1.14-1.49), negative surgical margins (medium volume: OR = 1.15, 95% CI = 1.06-1.24; high volume: OR = 1.28, 95% CI = 1.13-1.44), and appropriate locoregional treatment (medium volume: OR = 1.12, 95% CI = 1.07-1.17; high volume: OR = 1.16, 95% CI = 1.09-1.24). Diagnosis of breast cancer before initial surgery, negative surgical margins and appropriate use of radiation therapy may partially explain the volume-survival relationship. Dissemination of these processes of care to a broader group of hospitals could potentially improve the overall quality of care and outcomes of breast cancer survivors. Cancer 2017;123:957-66. © 2016 American Cancer Society.

  1. Formulating physical processes in a full-range model of soil water retention

    NASA Astrophysics Data System (ADS)

    Nimmo, J. R.

    2016-12-01

Currently used water retention models vary in how closely their formulas correspond to controlling physical processes such as capillarity, adsorption, and air trapping. In model development, realistic correspondence to physical processes has often been a lower priority than ease of use and compatibility with other models. For example, the wettest range is normally represented simplistically, for instance by a straight line of zero slope, or by default using the same formulation as for the middle range. The new model presented here recognizes the dominant processes within three segments of the range from oven-dryness to saturation. The adsorption-dominated dry range is represented by a logarithmic relation used in earlier models. The middle range of capillary advance/retreat and Haines jumps is represented by a new adaptation of the lognormal distribution function. In the wet range, the expansion of trapped air in response to matric pressure change is important because (1) it displaces water, and (2) it triggers additional volume-adjusting processes such as the collapse of liquid bridges between air pockets. For this range, the model incorporates the Boyle's-law inverse proportionality of trapped air volume and pressure, amplified by an empirical factor to account for the additional processes. With their basis in processes, the model's parameters have a strong physical interpretation, and in many cases they can be assigned values from knowledge of fundamental relationships or individual measurements. An advantage of the physically plausible treatment of the wet range is that it avoids problems such as the blowing-up of derivatives on approach to saturation, enhancing the model's utility for important but challenging wet-range phenomena such as domain exchange between preferential flow paths and the soil matrix. Further development might be able to accommodate hysteresis by a systematic adjustment of the relation between the wet and middle ranges.
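
    A hedged sketch of a three-segment retention function in the spirit of this description follows; the segment forms, junction handling, and all parameter values are assumptions for illustration, not the published model:

      # Three-segment full-range retention sketch: logarithmic adsorption in
      # the dry range, a lognormal capillary segment in the middle, and a
      # Boyle's-law trapped-air correction near saturation.
      import numpy as np
      from math import erf

      def theta(psi, theta_s=0.45, theta_j=0.10, psi_dry=1e6, psi_j=3e3,
                psi_med=30.0, sigma=1.0, air_frac=0.03, psi_ref=10.0):
          """psi: matric suction in kPa (positive); returns water content."""
          if psi >= psi_j:                  # dry range: adsorption, log form
              return theta_j * np.log(psi_dry / psi) / np.log(psi_dry / psi_j)
          # middle range: lognormal distribution of capillary drainage
          cap = 0.5 * (1.0 - erf(np.log(psi / psi_med) / (sigma * np.sqrt(2.0))))
          # wet range: trapped air shrinks as suction falls (Boyle's law)
          trapped_air = air_frac * psi_ref / (psi_ref + psi)
          return theta_j + (theta_s - theta_j - trapped_air) * cap

      for p in (0.1, 1, 10, 100, 1e3, 1e5):
          print(f"psi = {p:>8.1f} kPa -> theta = {theta(p):.3f}")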

  2. Exploiting Satellite Archives to Estimate Global Glacier Volume Changes

    NASA Astrophysics Data System (ADS)

    McNabb, R. W.; Nuth, C.; Kääb, A.; Girod, L.

    2017-12-01

    In the past decade, the availability of, and ability to process, remote sensing data over glaciers has expanded tremendously. Newly opened satellite image archives, combined with new processing techniques as well as increased computing power and storage capacity, have given the glaciological community the ability to observe and investigate glaciological processes and changes on a truly global scale. In particular, the opening of the ASTER archives provides further opportunities to both estimate and monitor glacier elevation and volume changes globally, including potentially on sub-annual timescales. With this explosion of data availability, however, comes the challenge of seeing the forest instead of the trees. The high volume of data available means that automated detection and proper handling of errors and biases in the data becomes critical, in order to properly study the processes that we wish to see. This includes holes and blunders in digital elevation models (DEMs) derived from optical data or penetration of radar signals leading to biases in DEMs derived from radar data, among other sources. Here, we highlight new advances in the ability to sift through high-volume datasets, and apply these techniques to estimate recent glacier volume changes in the Caucasus Mountains, Scandinavia, Africa, and South America. By properly estimating and correcting for these biases, we additionally provide a detailed accounting of the uncertainties in these estimates of volume changes, leading to more reliable results that have applicability beyond the glaciological community.

  3. Finite size effects in phase transformation kinetics in thin films and surface layers

    NASA Astrophysics Data System (ADS)

    Trofimov, Vladimir I.; Trofimov, Ilya V.; Kim, Jong-Il

    2004-02-01

    In studies of phase transformation kinetics in thin films, e.g. the crystallization of amorphous films, the familiar Kolmogorov-Johnson-Mehl-Avrami (KJMA) statistical model of crystallization has until recently been widely used, despite being applicable only to an infinite medium. In this paper, a model of transformation kinetics in thin films is presented, based on the concept of the survival probability of a randomly chosen point during the transformation process. Two model versions are studied: volume-induced transformation (VIT), in which the second-phase grains nucleate throughout the whole film volume, and surface-induced transformation (SIT), in which they form on an interface, each with two nucleation modes: instantaneous nucleation at the transformation onset, and continuous nucleation throughout the process. In the VIT case, finite-thickness effects give the transformation profile a maximum at the film's mid-plane, where the grain population instead reaches a minimum; the grain density is always higher than in a bulk material, and the thinner the film, the more slowly it transforms. The transformation kinetics in a thin film obeys a generalized KJMA equation whose parameters depend on the film thickness; in the limiting cases of extremely thin and extremely thick films, it reduces to the classical KJMA equation for 2D and 3D systems, respectively.
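
    For reference, the classical KJMA expression and a thickness-dependent Avrami exponent can be sketched as follows. The smooth interpolating form for the exponent is an illustrative assumption standing in for the paper's thickness-dependent parameters, not its actual derivation.

    ```python
    import numpy as np

    def kjma_fraction(t, k, n):
        """Classical KJMA transformed fraction, X(t) = 1 - exp(-(k t)**n),
        valid for nucleation and growth in an effectively infinite medium."""
        return 1.0 - np.exp(-(k * t) ** n)

    def avrami_exponent(thickness, n_2d=3.0, n_3d=4.0, crossover=1.0):
        """Hypothetical thickness interpolation of the Avrami exponent for
        continuous nucleation: n -> 3 (2D) for very thin films and n -> 4
        (3D) for very thick ones. The functional form is an illustrative
        stand-in, not the paper's derived dependence."""
        w = thickness / (thickness + crossover)
        return n_2d + (n_3d - n_2d) * w
    ```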

  4. Proton Radiography Peers into Metal Solidification

    DOE PAGES

    Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; ...

    2013-06-19

    Historically, metals are cut up and polished to see the structure and to infer how processing influences its evolution. Using proton radiography, we can now peer into a metal during processing without destroying it. Understanding the link between processing and structure is important because structure profoundly affects the properties of engineering materials. Synchrotron x-ray radiography has enabled real-time glimpses into metal solidification; however, x-ray energies favor the examination of small volumes and low-density metals. In this study, we use high-energy proton radiography for the first time to image a large metal volume (>10,000 mm³) during melting and solidification. We also show complementary x-ray results from a small volume (<1 mm³), bridging four orders of magnitude. In conclusion, real-time imaging will enable efficient process development and the control of structure evolution to make materials with intended properties; it will also permit the development of experimentally informed, predictive structure and process models.

  5. Maximum volume cuboids for arbitrarily shaped in-situ rock blocks as determined by discontinuity analysis—A genetic algorithm approach

    NASA Astrophysics Data System (ADS)

    Ülker, Erkan; Turanboy, Alparslan

    2009-07-01

    The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach that considers only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation, and finally a genetic algorithm (GA) for maximising the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR, TURKEY).
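
    As an illustration of the optimization step, the toy sketch below runs a small genetic algorithm that maximizes the volume of an axis-aligned cuboid inside a block bounded by half-space constraints. The block geometry, GA operators and all settings are hypothetical stand-ins for the paper's TS + GA pipeline, not its implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical in-situ block: unit-cube faces plus one oblique fracture,
    # each written as a half-space a . x <= b (not survey data from the paper)
    A = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1],
                  [1, 1, 0]], dtype=float)
    b = np.array([1, 0, 1, 0, 1, 0, 1.4])

    def fitness(g):
        """Volume of the axis-aligned cuboid (centre g[:3], sizes g[3:])
        if all 8 corners lie inside every half-space, else 0."""
        c, d = g[:3], np.abs(g[3:])
        signs = np.array([[i, j, k] for i in (-1, 1)
                          for j in (-1, 1) for k in (-1, 1)])
        corners = c + 0.5 * signs * d
        return float(np.prod(d)) if np.all(corners @ A.T <= b + 1e-12) else 0.0

    def evolve(pop_size=150, gens=250, sigma=0.04):
        pop = rng.uniform(0.0, 1.0, size=(pop_size, 6))
        for _ in range(gens):
            fit = np.array([fitness(g) for g in pop])
            elite = pop[int(np.argmax(fit))].copy()
            i, j = rng.integers(pop_size, size=(2, pop_size))   # tournaments
            parents = np.where((fit[i] >= fit[j])[:, None], pop[i], pop[j])
            alpha = rng.uniform(size=(pop_size, 1))             # blend crossover
            pop = alpha * parents + (1 - alpha) * parents[::-1]
            pop += rng.normal(0.0, sigma, pop.shape)            # mutation
            pop[0] = elite                                      # elitism
        fit = np.array([fitness(g) for g in pop])
        return pop[int(np.argmax(fit))], float(fit.max())

    best, volume = evolve()
    print("best feasible cuboid volume:", round(volume, 3))
    ```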

  6. Testing the generality of the zoom-lens model: Evidence for visual-pathway specific effects of attended-region size on perception.

    PubMed

    Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark

    2017-05-01

    There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended region facilitates greater perceptual enhancement than a wider one. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens model has used measures of spatial acuity ideally suited to parvocellular processing. This therefore raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.

  7. On the Relationship of the Fractal Dimension of Structure with the State of Drying Drops of Crystallizing Solutions (Thermodynamic and Experimental Modeling)

    NASA Astrophysics Data System (ADS)

    Golovanova, O. A.; Chikanova, E. S.; Fedoseev, V. B.

    2018-05-01

    The processes occurring in aqueous salt solutions have been investigated on the basis of thermodynamic and experimental modeling. The self-organization in a drying drop is analyzed using fractal theory, from which quantitative characteristics of the crystallization processes in a small volume are obtained.

  8. Salient regions detection using convolutional neural networks and color volume

    NASA Astrophysics Data System (ADS)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues; we then integrate the depth cues and color volume for saliency detection, following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  9. Sensors, Volume 1, Fundamentals and General Aspects

    NASA Astrophysics Data System (ADS)

    Grandke, Thomas; Ko, Wen H.

    1996-12-01

    'Sensors' is the first self-contained series to deal with the whole area of sensors. It describes general aspects, technical and physical fundamentals, construction, function, applications and developments of the various types of sensors. This volume deals with the fundamentals and common principles of sensors and covers the wide areas of principles, technologies, signal processing, and applications. Contents include: Sensor Fundamentals, e.g. Sensor Parameters, Modeling, Design and Packaging; Basic Sensor Technologies, e.g. Thin and Thick Films, Integrated Magnetic Sensors, Optical Fibres and Integrated Optics, Ceramics and Oxides; Sensor Interfaces, e.g. Signal Processing, Multisensor Signal Processing, Smart Sensors, Interface Systems; Sensor Applications, e.g. Automotive: On-board Sensors, Traffic Surveillance and Control, Home Appliances, Environmental Monitoring, etc. This volume is an indispensable reference work and textbook for both specialists and newcomers, researchers and developers.

  10. Image analysis and mathematical modelling for the supervision of the dough fermentation process

    NASA Astrophysics Data System (ADS)

    Zettel, Viktoria; Paquet-Durand, Olivier; Hecker, Florian; Hitzmann, Bernd

    2016-10-01

    The fermentation (proof) process of dough is one of the quality-determining steps in the production of baked goods. Besides the fluffiness, whose foundations are laid during fermentation, the flavour of the final product is strongly influenced during this production stage. However, until now no on-line measurement system has been available that can supervise this important process step. In this investigation the potential of an image analysis system is evaluated that enables the determination of the volume of fermenting dough pieces. The camera moves around the fermenting pieces and collects images of the objects from different angles (360° range). Using image analysis algorithms, the volume increase of individual dough pieces is determined. Based on a detailed mathematical description of the volume increase, which rests on the Bernoulli equation, the carbon dioxide production rate of the yeast cells, and the diffusion processes of carbon dioxide, the fermentation process is supervised. Important process parameters, like the carbon dioxide production rate of the yeast cells and the dough viscosity, can be estimated after just 300 s of proofing. The mean percentage error for forecasting the further evolution of the relative volume of the dough pieces is just 2.3%. Therefore, a forecast of the further evolution can be performed and used for fault detection.

  11. Monitoring landscape level processes using remote sensing of large plots

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    Global and regional assessments require timely information on landscape-level status (e.g., areal extent of different ecosystems) and processes (e.g., changes in land use and land cover). To measure and understand these processes at the regional level, and model their impacts, remote sensing is often necessary. However, processing massive volumes of remotely sensed...

  12. Simulating the water budget of a Prairie Potholes complex from LiDAR and hydrological models in North Dakota, USA

    USGS Publications Warehouse

    Huang, Shengli; Young, Claudia; Abdul-Aziz, Omar I.; Dahal, Devendra; Feng, Min; Liu, Shuguang

    2013-01-01

    Hydrological processes of the wetland complex in the Prairie Pothole Region (PPR) are difficult to model, partly due to a lack of wetland morphology data. We used Light Detection And Ranging (LiDAR) data sets to derive wetland features; we then modelled rainfall, snowfall, snowmelt, runoff, evaporation, the “fill-and-spill” mechanism, shallow groundwater loss, and the effect of wet and dry conditions. For large wetlands with a volume greater than thousands of cubic metres (e.g. about 3000 m³), the modelled water volume agreed fairly well with observations; however, the model did not succeed for small wetlands (e.g. volume less than 450 m³). Despite the failure for small wetlands, the modelled water area of the wetland complex coincided well with the interpretation of aerial photographs, showing a linear regression with an R² of around 0.80 and a mean average error of around 0.55 km². The next step is to improve the water budget modelling for small wetlands.
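
    The storage component of such a model reduces to a per-pothole "bucket" balance with a fill-and-spill overflow. The sketch below is a minimal toy version with assumed parameter names and values, not the study's calibrated model.

    ```python
    def pothole_water_budget(gains_m3, losses_m3, v0_m3, v_max_m3, k_seep=0.002):
        """Daily fill-and-spill water balance for a single pothole. Inputs are
        daily volumetric gains (rain, snowmelt, runoff) and losses (evaporation)
        in m3; shallow-groundwater seepage is a fixed fraction of storage;
        overflow spills once the LiDAR-derived maximum volume is reached.
        All names and values are illustrative."""
        v, spilled, series = v0_m3, 0.0, []
        for p, e in zip(gains_m3, losses_m3):
            v = max(0.0, v + p - e - k_seep * v)   # water in, ET + seepage out
            if v > v_max_m3:                       # fill-and-spill downstream
                spilled += v - v_max_m3
                v = v_max_m3
            series.append(v)
        return series, spilled

    vols, spill = pothole_water_budget([50, 120, 0, 0, 80], [10, 8, 12, 15, 9],
                                       v0_m3=1500.0, v_max_m3=3000.0)
    ```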

  13. Simulation and optimization of volume holographic imaging systems in Zemax.

    PubMed

    Wissmann, Patrick; Oh, Se Baek; Barbastathis, George

    2008-05-12

    We present a new methodology for ray-tracing analysis of volume holographic imaging (VHI) systems. Using the k-sphere formulation, we apply geometrical relationships to describe the volumetric diffraction effects imposed on rays passing through a volume hologram. We explain the k-sphere formulation in conjunction with the ray-tracing process and describe its implementation in a Zemax UDS (User Defined Surface). We conclude with examples of simulation and optimization results and demonstrate the consistency and usefulness of the proposed model.

  14. Effects of large volume injection of aliphatic alcohols as sample diluents on the retention of low hydrophobic solutes in reversed-phase liquid chromatography.

    PubMed

    David, Victor; Galaon, Toma; Aboul-Enein, Hassan Y

    2014-01-03

    Recent studies showed that the injection of large volumes of hydrophobic solvents used as sample diluents can be applied in reversed-phase liquid chromatography (RP-LC). This study reports systematic research focused on the influence of a series of aliphatic alcohols (from methanol to 1-octanol) on the retention process in RP-LC when large sample volumes are injected on the column. Several model analytes with low hydrophobic character were studied by the RP-LC process, for mobile phases containing methanol or acetonitrile as organic modifiers in different proportions with the aqueous component. It was found that, starting with 1-butanol, the aliphatic alcohols can be used as sample solvents and can be injected in high volumes, but they may influence the retention factor and peak shape of the dissolved solutes. The dependence of the retention factor of the studied analytes on the injection volume of these alcohols is linear, with its value decreasing as the sample volume is increased. The retention process when injecting up to 200 μL of the higher alcohols also depends on the content of the organic modifier (methanol or acetonitrile) in the mobile phase.

  15. Development of a hip joint model for finite volume simulations.

    PubMed

    Cardiff, P; Karač, A; FitzPatrick, D; Ivanković, A

    2014-01-01

    This paper establishes a procedure for the numerical analysis of a hip joint using the finite volume method. Patient-specific hip joint geometry is segmented directly from computed tomography and magnetic resonance imaging datasets, and the resulting bone surfaces are processed into a form suitable for volume meshing. A high-resolution continuum tetrahedral mesh has been generated, where a sandwich-model approach is adopted: the bones are represented as stiffer cortical shells surrounding more flexible cancellous cores. Cartilage is included as an extruded layer of uniform thickness, and the effect of layer thickness is investigated. To realistically position the bones, gait analysis has been performed, giving the 3D positions of the bones for the full gait cycle. Three phases of the gait cycle are examined using a finite-volume-based custom structural contact solver implemented in the open-source software OpenFOAM.

  16. Cellular volume regulation and substrate stiffness modulate the detachment dynamics of adherent cells

    NASA Astrophysics Data System (ADS)

    Yang, Yuehua; Jiang, Hongyuan

    2018-03-01

    Quantitative characterizations of cell detachment are vital for understanding the fundamental mechanisms of cell adhesion. Experiments have found that cell detachment shows strong rate dependence, which is mostly attributed to the binding-unbinding kinetics of receptor-ligand bonds. However, our recent study showed that cellular volume regulation can significantly affect the dynamics of adherent cells and cell detachment. How this cellular volume regulation contributes to the rate dependence of cell detachment remains elusive. Here, we systematically study the role of cellular volume regulation in the rate dependence of cell detachment by investigating cell detachment under nonspecific and specific adhesion. We find that cellular volume regulation and bond kinetics dominate the rate dependence of cell detachment at different time scales. We further test the validity of the traditional Johnson-Kendall-Roberts (JKR) contact model and the detachment model of Brochard-Wyart and de Gennes (W-G model). When the cell volume is changeable, the JKR model is appropriate for the detachment of neither convex nor concave cells. The W-G model is valid for the detachment of convex cells but is no longer applicable for the detachment of concave cells. Finally, we show that the rupture force of adherent cells is also highly sensitive to substrate stiffness, since an increase in substrate stiffness leads to more associated bonds. These findings provide insight into the critical role of cell volume in cell detachment and might have profound implications for other adhesion-related physiological processes.

  17. Identification of the states of the processes at liquid cathodes under potentiostatic conditions using semantic diagram models

    NASA Astrophysics Data System (ADS)

    Smirnov, G. B.; Markina, S. E.; Tomashevich, V. G.

    2012-08-01

    A technique is described for constructing semantic diagram models of the electrolysis at a liquid cathode in a salt halide melt under potentiostatic conditions that are intended for identifying the static states of this system that correspond to certain combinations of the electrode processes or the processes occurring in the volumes of salt and liquid-metal phases. Examples are given for the discharge of univalent and polyvalent metals.

  18. The unfolding effects on the protein hydration shell and partial molar volume: a computational study.

    PubMed

    Del Galdo, Sara; Amadei, Andrea

    2016-10-12

    In this paper we apply the computational analysis recently proposed by our group to characterize the solvation properties of a native protein in aqueous solution, and to four model aqueous solutions of globular proteins in their unfolded states, thus characterizing the protein unfolded-state hydration shell and quantitatively evaluating the protein unfolded-state partial molar volumes. Moreover, by using both the native and unfolded protein partial molar volumes, we obtain the corresponding variations (unfolding partial molar volumes), which we compare with the available experimental estimates. We also reconstruct the temperature and pressure dependence of the unfolding partial molar volume of Myoglobin, dissecting the structural and hydration effects involved in the process.

  19. Quantifying sediment connectivity in an actively eroding gully complex, Waipaoa catchment, New Zealand

    NASA Astrophysics Data System (ADS)

    Taylor, Richard J.; Massey, Chris; Fuller, Ian C.; Marden, Mike; Archibald, Garth; Ries, William

    2018-04-01

    Using a combination of airborne LiDAR (2005) and terrestrial laser scanning (2007, 2008, 2010, 2011), sediment delivery processes and sediment connectivity in a 20-ha gully complex, which contributes significantly to the Waipaoa sediment cascade, are quantified over a 6-year period. The acquisition of terrain data from high-resolution surveys of the whole gully-fan system provides new insights into slope processes and slope-channel linkages operating in the complex. Raw terrain data from the airborne and ground-based laser scans were converted into raster DEMs with a vertical accuracy between surveys of <±0.1 m. Grid elevations in each successive DEM were subtracted from the previous DEM to provide models of change across the gully and fan complex. In these models, deposition equates to positive and erosion to negative vertical change. Debris flows, slumping, and erosion by surface runoff (gullying in the conventional sense) generated on average 95,232 m³ of sediment annually, with a standard deviation of ±20,806 m³. The volumes of debris eroded from areas dominated by surface erosion processes were higher than in areas dominated by landslide processes. Over the six-year study period, sediment delivery from the source zones to the fan was 1.4 times larger than the volume of debris exported from the fan into Te Weraroa Stream. The average annual volume of sediment exported to Te Weraroa Stream varies widely, from 23,195 to 102,796 m³. Fluctuations in the volume of stored sediment within the fan, rather than external forcing by rainstorms or earthquakes, account for this annual variation. No large rainfall events occurred during the monitoring period; therefore, the sediment volumes and transfer processes captured by this study are representative of the background conditions that operate in this geomorphic system.

  20. Gas permeability of ice-templated, unidirectional porous ceramics.

    PubMed

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J

    2016-01-01

    We investigate the gas flow behavior of unidirectional porous ceramics processed by ice-templating. The pore volume ranged between 54% and 72% and the pore size between 2.9 [Formula: see text]m and 19.1 [Formula: see text]m. The maximum permeability ([Formula: see text] [Formula: see text] m[Formula: see text]) was measured in samples with the highest total pore volume (72%) and pore size (19.1 [Formula: see text]m). However, we demonstrate that it is possible to achieve a similar permeability ([Formula: see text] [Formula: see text] m[Formula: see text]) at 54% pore volume by modifying the pore shape. These results were compared with values reported and measured for isotropic porous materials processed by conventional techniques. In unidirectional porous materials, tortuosity ([Formula: see text]) is mainly controlled by pore size, unlike in isotropic porous structures where [Formula: see text] is linked to pore volume. Furthermore, we assessed the applicability of the Ergun and capillary models for predicting permeability and found that the capillary model accurately describes the gas flow behavior of unidirectional porous materials. Finally, we combined the permeability data obtained here with strength data for these materials to establish links between the strength and permeability of ice-templated materials.
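
    The capillary model the authors favor amounts to Hagen-Poiseuille flow through a bundle of parallel channels. A minimal sketch follows, with the tortuosity value assumed purely for illustration.

    ```python
    def capillary_permeability(porosity, pore_diameter_m, tortuosity):
        """Capillary-bundle estimate of intrinsic permeability,
        k = porosity * d**2 / (32 * tortuosity)  [m^2],
        i.e. Hagen-Poiseuille flow through parallel cylindrical channels."""
        return porosity * pore_diameter_m ** 2 / (32.0 * tortuosity)

    # Near the paper's most permeable sample: 72% pore volume, 19.1 um pores;
    # the tortuosity of 1.1 is an assumed placeholder, not a measured value.
    print(f"k ~ {capillary_permeability(0.72, 19.1e-6, 1.1):.2e} m^2")
    ```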

  1. United States Air Force Research Initiation Program for 1987. Volume 3

    DTIC Science & Technology

    1989-04-01

    Influence of Microstructural Variations on the Thermomechanical Processing in Dynamic Material Modeling of Titanium Aluminides, Dr. Ravinder Diwan, project 760-7MG-077. Final report submitted 15 March 1989. Titanium aluminides with strong, thermodynamically stable intermetallic phases ...

  2. Dynamic soft tissue deformation estimation based on energy analysis

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Yao, Bin

    2016-10-01

    Needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions, so it is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, where a geometric model is presented to quantitatively approximate the volume of tissue deformation. An energy-based method is applied to the dynamic process of needle insertion into soft tissue, and the volume of a cone is exploited to quantitatively approximate the deformation on the surface of soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformation is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is derived from the law of conservation of energy, with the volume of tissue deformation obtained from the image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation, with root mean squared errors between the fitted model and experimental data of 0.61 and 0.25 at insertion velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformation prove useful for compensating the needle-targeting error in rigid needle insertion procedures, especially for percutaneous needle insertion into organs.

  3. Soot volume fraction fields in unsteady axis-symmetric flames by continuous laser extinction technique.

    PubMed

    Kashif, Muhammad; Bonnety, Jérôme; Guibert, Philippe; Morin, Céline; Legros, Guillaume

    2012-12-17

    A Laser Extinction Method has been set up to provide the time history of the two-dimensional soot volume fraction field, at a tunable frequency of up to 70 Hz, inside an axisymmetric diffusion flame experiencing slow unsteady phenomena that preserve the symmetry. The use of a continuous-wave laser as the light source enables this repetition rate, which is an incremental advance in the laser extinction technique. The technique is shown to allow a fine description of the soot volume fraction field in a flame exhibiting a 12.6 Hz flickering phenomenon. Within this range of repetition rates, the technique and its subsequent post-processing require neither a method for time-domain reconstruction nor a correction for energy intrusion. Possibly complemented by such a reconstruction method, the technique should support further soot volume fraction databases in oscillating flames that exhibit characteristic times relevant to the current efforts in the validation of soot process modeling.
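
    The underlying reduction from transmitted intensity to soot volume fraction is the Beer-Lambert extinction relation. A minimal sketch is given below; the extinction coefficient K_e = 8.6 is a commonly assumed literature value, not necessarily the study's, and the Abel inversion needed to recover radial profiles in an axisymmetric flame is omitted.

    ```python
    import numpy as np

    def soot_volume_fraction(i_transmitted, i_incident, path_length_m,
                             wavelength_m=645e-9, k_e=8.6):
        """Line-of-sight soot volume fraction from laser extinction via
        Beer-Lambert:  f_v = -lambda * ln(I/I0) / (K_e * L).
        The wavelength default is an assumed red CW laser line."""
        return (-wavelength_m * np.log(i_transmitted / i_incident)
                / (k_e * path_length_m))

    # 92% transmission over a 5 cm path -> f_v ~ 1.25e-7 (about 0.13 ppm)
    print(soot_volume_fraction(0.92, 1.0, 0.05))
    ```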

  4. Identification of the states of the processes that occur on solid cathodes in the potentiostatic electrolysis mode using semantic diagram models

    NASA Astrophysics Data System (ADS)

    Smirnov, G. B.; Markina, S. E.; Tomashevich, V. G.

    2011-02-01

    A procedure is proposed to construct semantic diagram models for the electrolysis on a solid cathode in a salt halide melt under potentiostatic conditions. These models are intended to identify the static states of the system that correspond to a certain combination of the processes occurring on an electrode and in the system volume. Examples for discharging of univalent and polyvalent metals are given.

  5. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    NASA Astrophysics Data System (ADS)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  6. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  7. A new fundamental model of moving particle for reinterpreting Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umar, Muhamad Darwis

    2012-06-20

    The study of the Schroedinger equation based on the hypothesis that every particle must move randomly in a quantum-sized volume has been carried out. In addition to random motion, every particle can undergo relative motion through the movement of its quantum-sized volume, and these motions can coincide. In this proposed model, the random motion is an intrinsic property of the particle. Every change in the speed of the randomly intrinsic motion, or in the velocity of the translational motion of a quantum-sized volume, represents a transition between two states, and a change in the speed of the randomly intrinsic motion generates a diffusion process from the perspective of Brownian motion. The diffusion process can take place backward and forward, representing a dissipative system. To derive the Schroedinger equation from our hypothesis, we use the time operator introduced by Nelson. From a fundamental analysis, we find that, naturally, we should view Newton's law F = ma (in vector form) not as an external force, but merely as a description of both the presence of intrinsic random motion and the change of the particle's energy.

  8. 38th JANNAF Combustion Subcommittee Meeting. Volume 1

    NASA Technical Reports Server (NTRS)

    Fry, Ronald S. (Editor); Eggleston, Debra S. (Editor); Gannaway, Mary T. (Editor)

    2002-01-01

    This volume, the first of two volumes, is a collection of 55 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 38th Combustion Subcommittee (CS), 26th Airbreathing Propulsion Subcommittee (APS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 21st Modeling and Simulation Subcommittee meeting. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics cover five major technology areas: 1) Combustion - Propellant Combustion, Ingredient Kinetics, Metal Combustion, Decomposition Processes and Material Characterization, Rocket Motor Combustion, and Liquid & Hybrid Combustion; 2) Liquid Rocket Engines - Low Cost Hydrocarbon Liquid Rocket Engines, Liquid Propulsion Turbines, Liquid Propulsion Pumps, and Staged Combustion Injector Technology; 3) Modeling & Simulation - Development of Multi-Disciplinary RBCC Modeling, Gun Modeling, and Computational Modeling for Liquid Propellant Combustion; 4) Guns - Gun Propelling Charge Design and ETC Gun Propulsion; and 5) Airbreathing - Scramjet and Ramjet S&T Program Overviews.

  9. Resin Film Infusion (RFI) Process Modeling for Large Transport Aircraft Wing Structures

    NASA Technical Reports Server (NTRS)

    Knott, Tamara W.; Loos, Alfred C.

    2000-01-01

    Resin film infusion (RFI) is a cost-effective method for fabricating stiffened aircraft wing structures. The RFI process lends itself to the use of near-net-shape textile preforms manufactured through a variety of automated textile processes such as knitting and braiding. Often, these advanced fiber architecture preforms have through-the-thickness stitching for improved damage tolerance and delamination resistance. The challenge presently facing RFI is to refine the process to ensure complete infiltration and cure of a geometrically complex preform with the high fiber volume fraction needed for structural applications. An accurate measurement of preform permeability is critical for successful modeling of the RFI resin infiltration process: small changes in the permeability can result in very different infiltration behavior and times. Therefore, it is important to accurately measure the permeabilities of the textile preforms used in the RFI process. The objective of this investigation was to develop test methods that can be used to measure the compaction behavior and permeabilities of high fiber volume fraction, advanced fiber architecture textile preforms, which are often highly compacted due to the through-the-thickness stitching used to improve damage tolerance. Test fixtures were designed, fabricated, and used to measure both transverse and in-plane permeabilities of multiaxial warp knit and triaxial braided preforms at fiber volume fractions from 55% to 65%. In addition, the effects of stitching characteristics, thickness, and batch variability on permeability and compaction behavior were investigated.

  10. Assessment of edema volume in skin upon injury in a mouse ear model with optical coherence tomography

    PubMed Central

    Qin, Wan

    2017-01-01

    Accurate measurement of edema volume is essential for the investigation of tissue response and recovery following a traumatic injury. The measurements must be noninvasive and repeatable over time so as to monitor tissue response throughout the healing process. Such techniques are particularly necessary for the evaluation of therapeutics that are currently in development to suppress or prevent edema formation. In this study, we propose to use the optical coherence tomography (OCT) technique to image and quantify edema in a mouse ear model where the injury is induced by a superficial-thickness burn. Extraction of the edema volume is achieved by an attenuation compensation algorithm applied to the three-dimensional OCT images, followed by two segmentation procedures. In addition to edema volume, the segmentation method also enables accurate thickness mapping of the edematous tissue, which is an important characteristic of the external symptoms of edema. To the best of our knowledge, this is the first method for noninvasively measuring absolute edema volume. PMID:27282161

  11. Control volume based hydrocephalus research; analysis of human data

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer

    2010-11-01

    Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach is able to directly incorporate the diverse measurements obtained by clinicians into a simple, direct and robust mechanics-based framework. Clinical data obtained for analysis are discussed along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
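
    The mass-conservation half of the framework reduces to a simple statement per control volume. A minimal sketch under assumed waveform names follows; the momentum equation, which carries the pressure information, is not shown.

    ```python
    import numpy as np

    def ventricular_volume_change(q_in_ml_s, q_out_ml_s, dt_s):
        """Integral mass conservation for a control volume drawn around the
        ventricles: dV/dt = sum(Q_in) - sum(Q_out). Given per-frame inflow
        and outflow waveforms from MR velocity data [mL/s] sampled every
        dt_s seconds, return the cumulative volume change [mL]."""
        net = np.asarray(q_in_ml_s) - np.asarray(q_out_ml_s)
        return np.cumsum(net) * dt_s
    ```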

  12. Behaviors and kinetics of toluene adsorption-desorption on activated carbons with varying pore structure.

    PubMed

    Yang, Xi; Yi, Honghong; Tang, Xiaolong; Zhao, Shunzheng; Yang, Zhongyu; Ma, Yueqiang; Feng, Tiecheng; Cui, Xiaoxu

    2018-05-01

    This work was undertaken to investigate the behaviors and kinetics of toluene adsorption and desorption on activated carbons with varying pore structure. Five kinds of activated carbon from different raw materials were selected. Adsorption isotherms and breakthrough curves for toluene were measured. Langmuir and Freundlich equations were fitted to the equilibrium data, and the Freundlich equation was more suitable for simulating toluene adsorption. The process consisted of monolayer, multilayer and partial active-site adsorption types. The effect of the pore structure of the activated carbons on toluene adsorption capacity was investigated. The quasi-first-order model was more suitable for describing the process than the quasi-second-order model. The adsorption data were also modeled with the intraparticle diffusion model, and it was found that the adsorption process could be divided into three stages. In the external surface adsorption stage, the rate depended on the specific surface area. During the particle diffusion stage, pore structure and volume were the main factors affecting the adsorption rate. In the final equilibrium stage, the rate was determined by the ratio of meso- and macro-pores to total pore volume. The rate over the whole adsorption process was dominated by the toluene concentration. The desorption behavior of toluene on activated carbons was also investigated, and the process was divided into heat and mass transfer parts corresponding to emission and diffusion mechanisms, respectively. Physical adsorption played the main role during the adsorption process.
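
    The quasi-first-order model the authors describe has the closed form q(t) = q_e(1 - exp(-k1 t)). A minimal fitting sketch with hypothetical uptake data, not the study's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pseudo_first_order(t, q_e, k1):
        """Quasi-first-order uptake: q(t) = q_e * (1 - exp(-k1 * t))."""
        return q_e * (1.0 - np.exp(-k1 * t))

    # Hypothetical uptake curve: time [min] vs adsorbed toluene [mg/g]
    t = np.array([0.0, 5, 10, 20, 40, 80, 160])
    q = np.array([0.0, 42, 71, 105, 138, 158, 166])

    (q_e, k1), _ = curve_fit(pseudo_first_order, t, q, p0=(170.0, 0.05))
    print(f"q_e ~ {q_e:.1f} mg/g, k1 ~ {k1:.3f} 1/min")
    ```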

  13. Optimizing Endoscope Reprocessing Resources Via Process Flow Queuing Analysis.

    PubMed

    Seelen, Mark T; Friend, Tynan H; Levine, Wilton C

    2018-05-04

    The Massachusetts General Hospital (MGH) is merging its older endoscope processing facilities into a single new facility that will enable high-level disinfection of endoscopes for both the ORs and Endoscopy Suite, leveraging economies of scale for improved patient care and optimal use of resources. Finalized resource planning was necessary for the merging of facilities to optimize staffing and make final equipment selections to support the nearly 33,000 annual endoscopy cases. To accomplish this, we employed operations management methodologies, analyzing the physical process flow of scopes throughout the existing Endoscopy Suite and ORs and mapping the future state capacity of the new reprocessing facility. Further, our analysis required the incorporation of historical case and reprocessing volumes in a multi-server queuing model to identify any potential wait times as a result of the new reprocessing cycle. We also performed sensitivity analysis to understand the impact of future case volume growth. We found that our future-state reprocessing facility, given planned capital expenditures for automated endoscope reprocessors (AERs) and pre-processing sinks, could easily accommodate current scope volume well within the necessary pre-cleaning-to-sink reprocessing time limit recommended by manufacturers. Further, in its current planned state, our model suggested that the future endoscope reprocessing suite at MGH could support an increase in volume of at least 90% over the next several years. Our work suggests that with simple mathematical analysis of historic case data, significant changes to a complex perioperative environment can be made with ease while keeping patient safety as the top priority.
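
    A multi-server queuing calculation of the kind described can be sketched with the Erlang C formula for an M/M/c system. The case-volume conversion and all rates below are assumptions for illustration, not MGH figures or the authors' exact model.

    ```python
    import math

    def mmc_mean_wait(arrival_rate, service_rate, servers):
        """Mean time in queue for an M/M/c system via the Erlang C formula.
        Rates are per hour; 'servers' plays the role of the number of AERs."""
        a = arrival_rate / service_rate                 # offered load
        rho = a / servers
        if rho >= 1.0:
            raise ValueError("unstable queue: utilization >= 1")
        tail = a ** servers / math.factorial(servers)
        p_wait = tail / (tail + (1 - rho) * sum(a ** k / math.factorial(k)
                                                for k in range(servers)))
        return p_wait / (servers * service_rate - arrival_rate)

    # ~33,000 cases/yr over ~250 working days x 8 h ~ 16.5 scopes/h (assumed);
    # an AER cycle of ~30 min gives a service rate of 2/h; 10 AERs (assumed)
    print(f"mean wait ~ {60 * mmc_mean_wait(16.5, 2.0, 10):.1f} min")
    ```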

  14. Information-driven trade and price-volume relationship in artificial stock markets

    NASA Astrophysics Data System (ADS)

    Liu, Xinghua; Liu, Xin; Liang, Xiaobei

    2015-07-01

    The positive relation between stock price changes and trading volume (the price-volume relationship) as a stylized fact has attracted significant interest among finance researchers and investment practitioners. However, until now, consensus has not been reached regarding the causes of the relationship based on real market data, because extracting valuable variables (such as information-driven trade volume) from real data is difficult. This lack of general consensus motivates us to develop a simple agent-based computational artificial stock market where extracting the necessary variables is easy. Based on this model and its artificial data, our tests have found that the aggressive trading style of informed agents can produce a price-volume relationship. Therefore, the information-spreading process is not a necessary condition for producing the price-volume relationship.

  15. Radar volume reflectivity estimation using an array of ground-based rainfall drop size detectors

    NASA Astrophysics Data System (ADS)

    Lane, John; Merceret, Francis; Kasparis, Takis; Roy, D.; Muller, Brad; Jones, W. Linwood

    2000-08-01

    Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained from these single-point measurements has historically been important in the study of applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that only gravity and the terminal velocity due to atmospheric drag influence hydrometeor dynamics within the sampling volume. The sampling volume is characterized by wind velocities, composed of vertical and horizontal components, which are input parameters to the 3D-DSD model. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
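
    The reflectivity side of the Z-R computation is the sixth-moment sum over the drop size distribution. A minimal sketch follows, using a hypothetical Marshall-Palmer DSD rather than the study's disdrometer data.

    ```python
    import numpy as np

    def reflectivity_from_dsd(diam_mm, n_d, bin_width_mm):
        """Reflectivity factor from a drop size distribution,
        Z = sum N(D) * D**6 * dD  [mm^6 m^-3], with N(D) in m^-3 mm^-1;
        returns (Z, dBZ)."""
        z = np.sum(n_d * diam_mm ** 6 * bin_width_mm)
        return z, 10.0 * np.log10(z)

    # Marshall-Palmer DSD for R = 5 mm/h (illustrative, not the study's data):
    # N(D) = N0 exp(-Lambda D), N0 = 8000 m^-3 mm^-1, Lambda = 4.1 R^-0.21
    d = np.arange(0.1, 6.0, 0.1)
    lam = 4.1 * 5.0 ** -0.21
    z, dbz = reflectivity_from_dsd(d, 8000.0 * np.exp(-lam * d), 0.1)
    print(f"Z ~ {z:.0f} mm^6/m^3 ({dbz:.1f} dBZ)")
    ```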

  16. JANNAF 35th Combustion Subcommittee Meeting. Volume 1

    NASA Technical Reports Server (NTRS)

    Fry, Ronald S. (Editor); Gannaway, Mary T. (Editor); Rognan, Melanie (Editor)

    1998-01-01

    Volume 1, the first of two volumes, is a compilation of 63 unclassified/unlimited-distribution technical papers presented at the 35th meeting of the Joint Army-Navy-NASA-Air Force (JANNAF) Combustion Subcommittee (CS), held jointly with the 17th Propulsion Systems Hazards Subcommittee (PSHS) and the Airbreathing Propulsion Subcommittee (APS). The meeting was held on 7-11 December 1998 at Raytheon Systems Company and the Marriott Hotel, Tucson, AZ. Topics covered include solid gun propellant processing, ignition and combustion, charge concepts, barrel erosion and flash, gun interior ballistics, kinetics and molecular modeling, ETC gun modeling, simulation and diagnostics, and liquid gun propellant combustion; solid rocket motor propellant combustion, combustion instability fundamentals, motor instability, and measurement techniques; and liquid and hybrid rocket combustion.

  17. Multiscale Modeling of Cell Interaction in Angiogenesis: From the Micro- to Macro-scale

    NASA Astrophysics Data System (ADS)

    Pillay, Samara; Maini, Philip; Byrne, Helen

    Solid tumors require a supply of nutrients to grow in size. To this end, tumors induce the growth of new blood vessels from existing vasculature through the process of angiogenesis. In this work, we use a discrete agent-based approach to model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death processes. We use the transition probabilities associated with the discrete models to determine collective cell behavior, in terms of partial differential equations, using a Markov chain and master equation framework. We find that the cell-level dynamics gives rise to a migrating cell front in the form of a traveling wave on the macro-scale. The behavior of this front depends on the cell interactions that are included and the extent to which volume exclusion is taken into account in the discrete micro-scale model. We also find that well-established continuum models of angiogenesis cannot distinguish between certain types of cell behavior on the micro-scale. This may impact drug development strategies based on these models.
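
    A minimal instance of this model class is a biased random walk on a lattice with volume exclusion. The sketch below (1D, no birth/death) illustrates the exclusion rule whose macro-scale limit yields the travelling-wave behaviour described; all settings are chosen for illustration, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def step(lattice, bias=0.2):
        """One sweep over a 1D lattice of agents with volume exclusion:
        each agent attempts a right-biased hop, and the move is aborted
        when the target site is already occupied."""
        n = lattice.size
        for i in rng.permutation(np.flatnonzero(lattice)):
            j = i + 1 if rng.random() < 0.5 * (1 + bias) else i - 1
            if 0 <= j < n and lattice[j] == 0:      # volume exclusion
                lattice[i], lattice[j] = 0, 1

    lat = np.zeros(400, dtype=np.int8)
    lat[:40] = 1                                     # initial column of cells
    for _ in range(1000):
        step(lat)
    print("front position after 1000 sweeps:", np.flatnonzero(lat).max())
    ```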

  18. Biostereometric Data Processing In ERGODATA: Choice Of Human Body Models

    NASA Astrophysics Data System (ADS)

    Pineau, J. C.; Mollard, R.; Sauvignon, M.; Amphoux, M.

    1983-07-01

    The definition of human body models was elaborated using anthropometric data from ERGODATA. The first model reduces the human body to a series of points and lines. The second model is well adapted to representing the volume of each segmentary element. The third is an original model built from conventional anatomical points: each segment is defined in space by a triangular plane located by its 3D coordinates. This new model supports all the processing required for computer-aided design (C.A.D.) in ergonomics, as well as in biomechanics and orthopaedics.

  19. A trait-based test for habitat filtering: Convex hull volume

    USGS Publications Warehouse

    Cornwell, W.K.; Schwilk, D.W.; Ackerly, D.D.

    2006-01-01

    Community assembly theory suggests that two processes affect the distribution of trait values within communities: competition and habitat filtering. Within a local community, competition leads to ecological differentiation of coexisting species, while habitat filtering reduces the spread of trait values, reflecting shared ecological tolerances. Many statistical tests for the effects of competition exist in the literature, but measures of habitat filtering are less well-developed. Here, we present convex hull volume, a construct from computational geometry, which provides an n-dimensional measure of the volume of trait space occupied by species in a community. Combined with ecological null models, this measure offers a useful test for habitat filtering. We use convex hull volume and a null model to analyze California woody-plant trait and community data. Our results show that observed plant communities occupy less trait space than expected from random assembly, a result consistent with habitat filtering. ?? 2006 by the Ecological Society of America.
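
    A minimal sketch of the test with SciPy, using hypothetical trait data and a random-draw null model rather than the California woody-plant dataset:

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(2)

    def hull_volume(traits):
        """Convex hull volume of a community's species in n-D trait space."""
        return ConvexHull(traits).volume

    # Hypothetical data: 3 traits, a 40-species regional pool, a 12-species plot
    pool = rng.normal(size=(40, 3))
    observed = hull_volume(pool[:12])

    # Null model: communities assembled at random from the regional pool
    null = np.array([hull_volume(pool[rng.choice(40, 12, replace=False)])
                     for _ in range(999)])
    p_value = (null <= observed).mean()   # small => consistent with filtering
    print(f"observed {observed:.2f}, null mean {null.mean():.2f}, P = {p_value:.3f}")
    ```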

  20. Process Mining Online Assessment Data

    ERIC Educational Resources Information Center

    Pechenizkiy, Mykola; Trcka, Nikola; Vasilyeva, Ekaterina; van der Aalst, Wil; De Bra, Paul

    2009-01-01

    Traditional data mining techniques have been extensively applied to find interesting patterns, build descriptive and predictive models from large volumes of data accumulated through the use of different information systems. The results of data mining can be used for getting a better understanding of the underlying educational processes, for…

  1. Memory, Cognitive Processing, and the Process of "Listening": A Reply to Thomas and Levine.

    ERIC Educational Resources Information Center

    Bostrom, Robert N.

    1996-01-01

    Describes several "inaccurate" statements made in L. Thomas' and T. Levine's article in this journal (volume 21, page 103) regarding the current author's research and positions on the listening construct. Suggests that Thomas' and Levine's model has serious methodological flaws. (RS)

  2. Serpentinization: Getting water into a low permeability peridotite

    NASA Astrophysics Data System (ADS)

    Ulven, Ole Ivar

    2017-04-01

    Fluid-consuming rock transformation processes occur in a variety of settings in the Earth's crust. One such process is serpentinization, which involves hydration of ultramafic rock to form serpentine. With peridotite being one of the dominant rocks in the oceanic crust, this process changes physical and chemical properties of the crust at a large scale, increases the amount of water that enters subduction zones, and might even affect plate tectonics (Jamtveit et al., 2016). A significant number of papers have studied serpentinization in different settings, from reaction fronts progressing over hundreds of metres (Rudge et al., 2010) to interface-scale fracture initiation (Plümper et al., 2012). However, the process represents a complicated multi-physics problem which couples external stress, mechanical deformation, volume change, fracture formation, fluid transport, the chemical reaction, heat production and heat flow. Even though it has been argued that fracture formation caused by the volume expansion allows fluid infiltration into the peridotite (Rudge et al., 2010), it remains unclear how sufficient water can enter the initially low-permeability peridotite to pervasively serpentinize the rock at kilometre scale. In this work, we study serpentinization numerically, utilizing a thermo-hydro-mechanical model extended with a fluid-consuming chemical reaction that increases the rock volume, reduces its density and strength, changes the permeability of the rock, and potentially induces fracture formation. The two-way coupled hydromechanical model is based on a discrete element model (DEM) previously used to study a volume-expanding process (Ulven et al., 2014a, 2014b), combined with a fluid transport model based on poroelasticity (Ulven and Sun, under review), which is here extended to include fluid-unsaturated conditions. Finally, a new model for reactive heat production and heat flow is introduced, making this probably the first fully coupled chemo-thermo-hydro-mechanical model of serpentinization. With this model, we are able to improve the understanding of how water is able to penetrate deep into the crust to pervasively serpentinize the initially low-permeability peridotite. References: Jamtveit, B., Austrheim, H., and Putnis, A., "Disequilibrium metamorphism of stressed lithosphere", Earth-Sci. Rev. 154, 2016, pp. 1-13. Plümper, O., Røyne, A., Magraso, A., and Jamtveit, B., "The interface-scale mechanism of reaction-induced fracturing during upper mantle serpentinization", Geology 40, 2012, pp. 1103-1106. Rudge, J. F., Kelemen, P. B., and Spiegelman, M., "A simple model of reaction induced cracking applied to serpentinization and carbonation of peridotite", Earth Planet. Sc. Lett. 291, 2010, Issues 1-4, pp. 215-227. Ulven, O. I., Storheim, H., Austrheim, H., and Malthe-Sørenssen, A., "Fracture initiation during volume increasing reactions in rocks and applications for CO2 sequestration", Earth Planet. Sc. Lett. 389, 2014a, pp. 132-142, doi:10.1016/j.epsl.2013.12.039. Ulven, O. I., Jamtveit, B., and Malthe-Sørenssen, A., "Reaction-driven fracturing of porous rock", J. Geophys. Res. Solid Earth 119, 2014b, doi:10.1002/2014JB011102. Ulven, O. I., and Sun, W. C., "Borehole breakdown studied using a two-way coupling dual-graph lattice model for fluid-driven fracture", under review.

  3. Global Fleet Station: Station Ship Concept

    DTIC Science & Technology

    2008-02-01

    The basic ISO TEU containers can be designed for any number of configurations and provide many different capabilities. For example, there are ... Design Process: The ship was designed using an iterative weight and volume balancing method. This method assigns a weight and volume to each ... from existing merchant ships. Different ship types are modeled in the algorithm through the selection of appropriate non-dimensional factors.

  4. Engineer Modeling Study. Volume II. Users Manual.

    DTIC Science & Technology

    1982-09-01

    ... (Software Distribution Center, Digital Equipment Corporation, 1980). The following paragraphs briefly describe each of the major input sections: ... abbreviation; 3. a sequence number for post-processing; 4. clock time; 5. an order number pointer (six digits); 6. a job number pointer (six digits); 7. a unit number ... KIT Users Manual (Boeing Computer Services, Inc., 1977); VAX/VMS Users Manual, Volume 3A (Software Distribution Center, Digital Equipment Corporation, 1980).

  5. Numerical Simulation of Thawing Process of Biological Tissue

    NASA Astrophysics Data System (ADS)

    Momose, Noboru; Tada, Yukio; Hayashi, Yujiro

    A heat transfer and simplified physicochemical model is proposed for the thawing of a frozen biological cell element consisting of a cell and an extracellular region. The melting of intra- and extracellular ice, water transport through the cell membrane, and other microscale behavior during the thawing process are discussed as functions of temperature. Recovery of the cell volume and the change of the osmotic pressure difference during thawing are clarified theoretically in connection with the heating velocity, initial cell volume and membrane permeability. Extending this model, the thawing of cellular tissue consisting of numerous cell elements is also simulated; there is a position where the osmotic pressure difference becomes maximal during thawing. Summarizing these results, the thawing damage due to osmotic stress is discussed in relation to the heating operation and the size effect of the tissue.

  6. Quantification of micro-CT images of textile reinforcements

    NASA Astrophysics Data System (ADS)

    Straumit, Ilya; Lomov, Stepan V.; Wevers, Martine

    2017-10-01

    VoxTex software (KU Leuven) employs 3D image processing that uses local directionality information retrieved by analysis of the local structure tensor. The processing results in a 3D voxel array, with each voxel carrying information on (1) the material type (matrix; yarn/ply, with identification of the yarn/ply in the reinforcement architecture; void) and (2) the fibre direction for fibrous yarns/plies. Knowledge of the material phase volume and characterisation of the textile structure allows (3) a fibre volume fraction to be assigned to the voxels. This basic voxel model can be further used for different types of material analysis: internal geometry and characterisation of defects; permeability; micromechanics; and mesoFE voxel models. Apart from the voxel-based analysis, approaches to the reconstruction of the yarn paths are presented.
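
    A minimal sketch of structure-tensor direction extraction of the kind described, using NumPy/SciPy; this is an illustrative reimplementation of the general technique, not VoxTex's code.

    ```python
    import numpy as np
    from scipy import ndimage

    def fibre_directions(volume, sigma=2.0):
        """Per-voxel fibre direction from the local structure tensor: smooth
        the outer products of the grey-value gradients, then take the
        eigenvector of the smallest eigenvalue (intensity varies least
        along the fibre axis)."""
        g = np.gradient(volume.astype(float))
        t = {(i, j): ndimage.gaussian_filter(g[i] * g[j], sigma)
             for i in range(3) for j in range(i, 3)}
        dirs = np.empty(volume.shape + (3,))
        for idx in np.ndindex(volume.shape):
            m = np.array([[t[0, 0][idx], t[0, 1][idx], t[0, 2][idx]],
                          [t[0, 1][idx], t[1, 1][idx], t[1, 2][idx]],
                          [t[0, 2][idx], t[1, 2][idx], t[2, 2][idx]]])
            dirs[idx] = np.linalg.eigh(m)[1][:, 0]   # ascending eigenvalues
        return dirs

    demo = fibre_directions(np.random.default_rng(0).normal(size=(16, 16, 16)))
    ```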

  7. CD-ROM publication of the Mars digital cartographic data base

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Eliason, E. M.; Soderblom, L. A.; Edwards, Kathleen; Wu, Sherman S. C.

    1991-01-01

    The recently completed Mars mosaicked digital image model (MDIM) and the soon-to-be-completed Mars digital terrain model (DTM) are being transcribed to optical disks to simplify distribution to planetary investigators. These models, completed in FY 1991, provide a cartographic base to which all existing Mars data can be registered. The digital image map of Mars is a cartographic extension of a set of compact disk read-only memory (CD-ROM) volumes containing individual Viking Orbiter images now being released. The data in these volumes are pristine in the sense that they were processed only to the extent required to view them as images. They contain the artifacts and the radiometric, geometric, and photometric characteristics of the raw data transmitted by the spacecraft. This new set of volumes, on the other hand, contains cartographic compilations made by processing the raw images to reduce radiometric and geometric distortions and to form geodetically controlled MDIM's. It also contains digitized versions of an airbrushed map of Mars as well as a listing of all feature names approved by the International Astronomical Union. In addition, special geodetic and photogrammetric processing has been performed to derive rasters of topographic data, or DTM's. The latter have a format similar to that of MDIM, except that elevation values are used in the array instead of image brightness values. The set consists of seven volumes: (1) Vastitas Borealis Region of Mars; (2) Xanthe Terra of Mars; (3) Amazonis Planitia Region of Mars; (4) Elysium Planitia Region of Mars; (5) Arabia Terra of Mars; (6) Planum Australe Region of Mars; and (7) a digital topographic map of Mars.

  8. 3D-information fusion from very high resolution satellite sensors

    NASA Astrophysics Data System (ADS)

    Krauss, T.; d'Angelo, P.; Kuschk, G.; Tian, J.; Partovi, T.

    2015-04-01

    In this paper we show the pre-processing of, and the potential for environmental applications of, very high resolution (VHR) satellite stereo imagery such as that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, a dense digital surface model (DSM) first has to be generated. From this, a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are then derived. Combining these elevation-based data with a spectral classification allows detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can be used directly for simulation and monitoring of environmental issues. Examples are the simulation of floods, building-volume and population estimation, simulation of road noise, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and the sprawl of informal settlements, and much more. Outside of urban areas, too, volume information brings literally a new dimension to Earth observation tasks such as volume estimation of forests and illegal logging, volumes of (illegal) open-pit mining activities, estimation of flooding or tsunami risks, dike planning, etc. We present the preprocessing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images, and derived digital terrain models (DTMs). From these components we show how a monitoring and decision-fusion-based 3D change detection can be realized using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
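    The nDEM is simply the DSM minus the DTM. A minimal sketch follows, in which the DTM is approximated by a grey-scale morphological opening of the DSM (a deliberately simple ground filter assumed for illustration; operational DTM extraction is more elaborate) and a toy building volume is read off the result.

    ```python
    import numpy as np
    from scipy import ndimage

    def normalized_dem(dsm, ground_window=51):
        """DTM by grey-scale opening of the DSM; the window (in pixels) must be
        larger than the widest off-ground object, an assumption made here."""
        dtm = ndimage.grey_opening(dsm, size=(ground_window, ground_window))
        return dtm, dsm - dtm          # nDEM = off-ground heights

    gsd = 0.5                          # m, WorldView-class ground sampling (assumed)
    dsm = np.zeros((200, 200))
    dsm[80:120, 80:120] = 10.0         # toy 20 m x 20 m block, 10 m tall
    dtm, ndem = normalized_dem(dsm)
    print("building volume [m^3]:", ndem.sum() * gsd**2)   # ~4000
    ```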

  9. Using stereo satellite imagery to account for ablation, entrainment, and compaction in volume calculations for rock avalanches on Glaciers: Application to the 2016 Lamplugh Rock Avalanche in Glacier Bay National Park, Alaska

    USGS Publications Warehouse

    Bessette-Kirton, Erin; Coe, Jeffrey A.; Zhou, Wendy

    2018-01-01

    The use of preevent and postevent digital elevation models (DEMs) to estimate the volume of rock avalanches on glaciers is complicated by ablation of ice before and after the rock avalanche, scour of material during rock avalanche emplacement, and postevent ablation and compaction of the rock avalanche deposit. We present a model to account for these processes in volume estimates of rock avalanches on glaciers. We applied our model by calculating the volume of the 28 June 2016 Lamplugh rock avalanche in Glacier Bay National Park, Alaska. We derived preevent and postevent 2‐m resolution DEMs from WorldView satellite stereo imagery. Using data from DEM differencing, we reconstructed the rock avalanche and adjacent surfaces at the time of occurrence by accounting for elevation changes due to ablation and scour of the ice surface, and postevent deposit changes. We accounted for uncertainties in our DEMs through precise coregistration and an assessment of relative elevation accuracy in bedrock control areas. The rock avalanche initially displaced 51.7 ± 1.5 Mm3 of intact rock and then scoured and entrained 13.2 ± 2.2 Mm3 of snow and ice during emplacement. We calculated the total deposit volume to be 69.9 ± 7.9 Mm3. Volume estimates that did not account for topographic changes due to ablation, scour, and compaction underestimated the deposit volume by 31.0–46.8 Mm3. Our model provides an improved framework for estimating uncertainties affecting rock avalanche volume measurements in glacial environments. These improvements can contribute to advances in the understanding of rock avalanche hazards and dynamics.
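    At its core, the deposit volume estimate is a DEM difference summed over the deposit mask, with elevation corrections and a random-error term from bedrock control areas. The sketch below is a deliberately simplified stand-in with invented numbers; the published model treats ablation, scour, entrainment, and compaction in far more detail.

    ```python
    import numpy as np

    # DEM-differencing volume estimate (toy numbers, not the Lamplugh values)
    cell = 2.0                         # m, DEM grid spacing
    pre = np.zeros((500, 500))         # pre-event surface
    post = pre.copy()
    post[100:300, 100:300] += 4.0      # 4 m thick toy deposit
    ablation = 0.5                     # m of ice lost between acquisitions (assumed)

    dh = post - pre + ablation         # restore the surface lowered by ablation
    deposit = dh > 1.0                 # mask: deposit cells only
    volume = dh[deposit].sum() * cell**2

    sigma_dh = 0.4                     # m, relative vertical accuracy (assumed)
    sigma_v = sigma_dh * cell**2 * np.sqrt(deposit.sum())   # random-error part
    print(f"deposit volume: {volume:.0f} +/- {sigma_v:.0f} m^3")
    ```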

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Yueying; Kruger, Albert A.

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) Statement of Work (Department of Energy Contract DE-AC27-01RV14136, Section C) requires the contractor to develop and use process models for flowsheet analyses and pre-operational planning assessments. The Dynamic (G2) Flowsheet is a discrete-time process model that enables the project to evaluate impacts to throughput from event-driven activities such as pumping, sampling, storage, recycle, separation, and chemical reactions. The model is developed by the Process Engineering (PE) department, and is based on the Flowsheet Bases, Assumptions, and Requirements Document (24590-WTP-RPT-PT-02-005), commonly called the BARD. The terms Dynamic (G2) Flowsheet and Dynamic (G2) Model are interchangeable in this document. The foundation of this model is a dynamic material balance governed by prescribed initial conditions, boundary conditions, and operating logic. The dynamic material balance is achieved by tracking storage and material flows within the plant over discrete time increments. The initial conditions include a feed vector that represents the waste compositions and delivery sequence of the Tank Farm batches, and the volumes and concentrations of solutions in process equipment before startup. The boundary conditions are the physical limits of the flowsheet design, such as piping, volumes, flowrates, operation efficiencies, and the physical and chemical environments that impact separations, phase equilibria, and reaction extents. The operating logic represents the rules and strategies of running the plant.
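    A dynamic material balance of this discrete-time kind reduces to tracking vessel inventories over fixed time increments, subject to capacity limits (boundary conditions) and operating rules. A minimal sketch with invented vessel parameters (these are not WTP flowsheet values):

    ```python
    # Discrete-time material balance for one vessel under simple operating logic.
    dt = 0.1                         # h, time increment
    cap = 100.0                      # m^3, vessel capacity (boundary condition)
    feed_rate, out_rate = 8.0, 5.0   # m^3/h, transfer rates (assumed)
    vol, t = 20.0, 0.0               # initial condition

    while t < 24.0:
        inflow = feed_rate * dt if vol < cap else 0.0    # operating logic:
        outflow = out_rate * dt if vol > 10.0 else 0.0   # keep a 10 m^3 heel
        vol = min(cap, vol + inflow - outflow)
        t += dt
    print(f"inventory after {t:.0f} h: {vol:.1f} m^3")
    ```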

  11. Controls on Martian Hydrothermal Systems: Application to Valley Network and Magnetic Anomaly Formation

    NASA Technical Reports Server (NTRS)

    Harrison, Keith P.; Grimm, Robert E.

    2002-01-01

    Models of hydrothermal groundwater circulation can quantify limits to the role of hydrothermal activity in Martian crustal processes. We present here the results of numerical simulations of convection in a porous medium due to the presence of a hot intruded magma chamber. The parameter space includes magma chamber depth, volume, aspect ratio, and host rock permeability and porosity. A primary goal of the models is the computation of surface discharge. Discharge increases approximately linearly with chamber volume, decreases weakly with depth (at low geothermal gradients), and is maximized for equant-shaped chambers. Discharge increases linearly with permeability until limited by the energy available from the intrusion. Changes in the average porosity are balanced by changes in flow velocity and therefore have little effect. Water/rock ratios of approximately 0.1, obtained by other workers from models based on the mineralogy of the Shergotty meteorite, imply minimum permeabilities of 10(exp -16) sq m during hydrothermal alteration. If substantial vapor volumes are required for soil alteration, the permeability must exceed 10(exp -15) sq m. The principal application of our model is to test the viability of hydrothermal circulation as the primary process responsible for the broad spatial correlation of Martian valley networks with magnetic anomalies. For host rock permeabilities as low as 10(exp -17) sq m and intrusion volumes as low as 50 cu km, the total discharge due to intrusions building that part of the southern highlands crust associated with magnetic anomalies spans a range comparable to that of the inferred discharge from the overlying valley networks.

  12. [A simple model for describing pressure-volume curves in free balloon dilatation with reference the dynamics of inflation hydraulic aspects].

    PubMed

    Bloss, P; Werner, C

    2000-06-01

    We propose a simple model to describe pressure-time and pressure-volume curves for the free balloon (balloon in air) of balloon catheters, taking into account the dynamics of the inflation device. On the basis of our investigations of the flow-rate dependence of characteristic parameters of the pressure-time curves, the appropriateness of this simple model is demonstrated using a representative example. Basic considerations lead to the following assumptions: (i) the flow within the shaft of the catheter is laminar, and (ii) the volume decrease of the liquid used for inflation due to pressurization can be neglected if the liquid is carefully degassed prior to inflation and if the total volume of the liquid in the system is less than 2 ml. Taking into account the dynamics of the inflation device used for pumping the liquid into the proximal end of the shaft during inflation, the inflation process can be subdivided into three phases: an initial phase, a filling phase, and a dilatation phase. For these three phases, the transformation from the time to the volume coordinates is given. On the basis of our model, the following parameters of the balloon catheter can be determined from a measured pressure-time curve: (1) the resistance to flow of the liquid through the shaft of the catheter and the resulting pressure drop across the shaft, (2) the residual volume and residual pressure of the balloon, and (3) the volume compliance of the balloon catheter with and without the inflation device.
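    Assumption (i) makes the shaft pressure drop follow the Hagen-Poiseuille law, ΔP = 8μLQ/(πr^4). A quick evaluation with assumed catheter dimensions (illustrative values only, not the paper's measurements):

    ```python
    import math

    # Laminar (Hagen-Poiseuille) pressure drop across the catheter shaft.
    mu = 1.0e-3        # Pa s, viscosity of degassed water
    L = 1.2            # m, shaft length (assumed)
    r = 0.35e-3        # m, lumen radius (assumed)
    Q = 0.5e-6 / 60.0  # m^3/s, 0.5 ml/min inflation flow rate (assumed)

    dP = 8 * mu * L * Q / (math.pi * r**4)
    print(f"pressure drop across shaft: {dP / 1e3:.2f} kPa")   # ~1.7 kPa
    ```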

  13. Multivariate regression model for predicting lumber grade volumes of northern red oak sawlogs

    Treesearch

    Daniel A. Yaussy; Robert L. Brisbin

    1983-01-01

    A multivariate regression model was developed to predict green board-foot yields for the seven common factory lumber grades processed from northern red oak (Quercus rubra L.) factory grade logs. The model uses the standard log measurements of grade, scaling diameter, length, and percent defect. It was validated with an independent data set. The model...

  14. (Working) Memory and L2 Acquisition and Processing

    ERIC Educational Resources Information Center

    Rankin, Tom

    2017-01-01

    This review evaluates two recent anthologies that survey research at the intersection of cognitive psychological investigations of (working) memory and issues in second language (L2), and bilingual processing and acquisition. The volumes cover similar ground by outlining the theoretical underpinnings of models of (working) memory as well as…

  15. Coal conversion systems design and process modeling. Volume 1: Application of MPPR and Aspen computer models

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The development of a coal gasification system design and mass and energy balance simulation program for the TVA and other similar facilities is described. The materials-process-product model (MPPM) and the advanced system for process engineering (ASPEN) computer program were selected from available steady state and dynamic models. The MPPM was selected to serve as the basis for development of system level design model structure because it provided the capability for process block material and energy balance and high-level systems sizing and costing. The ASPEN simulation serves as the basis for assessing detailed component models for the system design modeling program. The ASPEN components were analyzed to identify particular process blocks and data packages (physical properties) which could be extracted and used in the system design modeling program. While ASPEN physical properties calculation routines are capable of generating physical properties required for process simulation, not all required physical property data are available, and must be user-entered.

  16. Application of Local Discretization Methods in the NASA Finite-Volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Yeh, Kao-San; Lin, Shian-Jiann; Rood, Richard B.

    2002-01-01

    We present the basic ideas of the dynamics system of the finite-volume General Circulation Model developed at NASA Goddard Space Flight Center for climate simulations and other applications in meteorology. The dynamics of this model is designed with emphasis on conservative and monotonic transport, where the property of Lagrangian conservation is used to maintain the physical consistency of the computational fluid for long-term simulations. While the model benefits from the noise-free solutions of monotonic finite-volume transport schemes, the property of Lagrangian conservation also partly compensates for the diffusion introduced into the transport by the treatment of monotonicity. By faithfully maintaining the fundamental laws of physics during the computation, this model is able to achieve sufficient accuracy for the global consistency of climate processes. Because the computing algorithms are based on local memory, this model has the advantage of efficiency in parallel computation with distributed memory. Further research is desirable to reduce the diffusion effects of monotonic transport for better accuracy, and to mitigate the limitation due to fast-moving gravity waves for better efficiency.
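    The conservative, monotone transport described here can be illustrated with a textbook flux-limited finite-volume scheme; the sketch below uses a generic van Leer limiter on a 1D periodic domain and is not the model's actual algorithm.

    ```python
    import numpy as np

    def van_leer(r):
        return (r + np.abs(r)) / (1.0 + np.abs(r))   # monotonic flux limiter

    def step(q, c):
        """One conservative advection update on a periodic domain, Courant c."""
        dq = np.roll(q, -1) - q                           # q[i+1] - q[i]
        safe = np.where(np.abs(dq) > 1e-12, dq, 1.0)
        r = np.where(np.abs(dq) > 1e-12, (q - np.roll(q, 1)) / safe, 0.0)
        flux = q + 0.5 * (1.0 - c) * van_leer(r) * dq     # limited interface value
        return q - c * (flux - np.roll(flux, 1))

    q = np.where((np.arange(200) >= 40) & (np.arange(200) < 80), 1.0, 0.0)
    mass0 = q.sum()
    for _ in range(400):
        q = step(q, 0.5)
    print(abs(q.sum() - mass0))   # ~0: mass exactly conserved
    print(q.min(), q.max())       # no new extrema (up to roundoff): stays in [0, 1]
    ```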

  17. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

    We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
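    One readily available approximation to a Dirichlet Process Gaussian-mixture density estimate is scikit-learn's variational BayesianGaussianMixture with a truncated Dirichlet-process weight prior. The sketch below fits an invented 3D sample cloud standing in for posterior position samples; it is a stand-in for, not a reproduction of, the authors' implementation.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Fake 3D "posterior samples" (two overlapping clusters, invented numbers)
    rng = np.random.default_rng(1)
    samples = np.vstack([rng.normal([10, -5, 40], [1, 1, 5], (500, 3)),
                         rng.normal([12, -4, 60], [1, 1, 8], (500, 3))])

    dpgmm = BayesianGaussianMixture(
        n_components=20,                     # truncation level of the DP
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full", max_iter=500, random_state=0,
    ).fit(samples)

    print("effective components:", int(np.sum(dpgmm.weights_ > 1e-2)))
    print("log-density at test point:", dpgmm.score_samples([[11.0, -4.5, 50.0]])[0])
    ```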

  18. Utilization of BIM for automation of quantity takeoffs and cost estimation in transport infrastructure construction projects in the Czech Republic

    NASA Astrophysics Data System (ADS)

    Vitásek, Stanislav; Matějka, Petr

    2017-09-01

    The article deals with problematic parts of the automated processing of quantity takeoffs (QTO) from data generated in a BIM model. It focuses on models of road construction, and uses volumes and dimensions of excavation work to create an estimate of construction costs. The article uses a case study and explorative methods to discuss the possibilities and problems of data transfer from a model to a price system of construction production when such transfer is used for price estimates of construction works. Current QTOs and price tenders are made with 2D documents. This process is becoming obsolete because more modern tools can be used. The BIM phenomenon enables partial automation in processing volumes and dimensions of construction units and matching the data to units in a given price scheme. Therefore, the price of construction can be estimated and structured without lengthy and often imprecise manual calculations. The use of BIM for QTO is highly dependent on local market budgeting systems, therefore a proper push/pull strategy is required. It also requires a proper requirements specification, a compatible pricing database, and suitable software.

  19. SIMULATION MODEL FOR WATERSHED MANAGEMENT PLANNING. VOLUME 1. MODEL THEORY AND FORMULATION

    EPA Science Inventory

    Evaluation of nonpoint source pollution problems requires an understanding of the behavioral response to an ecosystem to the impacts of land use activities on individual components of that ecosystem. By analyzing basic ecosystem processes and impacts of land use activities on spe...

  20. Verification of a two-dimensional infiltration model for the resin transfer molding process

    NASA Technical Reports Server (NTRS)

    Hammond, Vincent H.; Loos, Alfred C.; Dexter, H. Benson; Hasko, Gregory H.

    1993-01-01

    A two-dimensional finite element model for the infiltration of a dry textile preform by an injected resin was verified. The model, which is based on the finite element/control volume technique, determines the total infiltration time and the pressure increase at the mold inlet associated with the RTM process. Important input data for the model are the compaction and permeability behavior of the preform along with the kinetic and rheological behavior of the resin. The compaction behavior for several textile preforms was determined by experimental methods. A power law regression model was used to relate fiber volume fraction to the applied compaction pressure. Results showed a large increase in fiber volume fraction with the initial application of pressure. However, as the maximum fiber volume fraction was approached, the amount of compaction pressure required to decrease the porosity of the preform rapidly increased. Similarly, a power law regression model was used to relate permeability to the fiber volume fraction of the preform. Two methods were used to measure the permeability of the textile preform. The first, known as the steady state method, measures the permeability of a saturated preform under constant flow rate conditions. The second, denoted the advancing front method, determines the permeability of a dry preform to an infiltrating fluid. Water, corn oil, and an epoxy resin, Epon 815, were used to determine the effect of fluid type and viscosity on the steady state permeability behavior of the preform. Permeability values measured with the different fluids showed that fluid viscosity had no influence on the permeability behavior of 162 E-glass and TTI IM7/8HS preforms. Permeabilities measured from steady state and advancing front experiments for the warp direction of 162 E-glass fabric were similar. This behavior was noticed for tests conducted with corn oil and Epon 815. Comparable behavior was observed for the warp direction of the TTI IM7/8HS preform and corn oil. Mold filling and flow visualization experiments were performed to verify the analytical computer model. Frequency dependent electromagnetic sensors were used to monitor the resin flow front as a function of time. For the flow visualization tests, a video camera and high resolution tape recorder were used to record the experimental flow fronts. Comparisons between experimental and model predicted flow fronts agreed well for all tests. For the mold filling tests conducted at constant flow rate injection, the model was able to accurately predict the pressure increase at the mold inlet during the infiltration process. A kinetics model developed to predict the degree of cure as a function of time for the injected resin accurately calculated the increase in the degree of cure during the subsequent cure cycle.
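    Both regressions mentioned above are power laws (fibre volume fraction versus compaction pressure, and permeability versus fibre volume fraction), which fit as straight lines in log-log space. A sketch with invented data points (not the measured 162 E-glass or TTI IM7/8HS values):

    ```python
    import numpy as np

    # Power-law fit v_f = a * P^b, done as a linear fit in log-log space.
    P  = np.array([10, 25, 50, 100, 200, 400.0])         # kPa, compaction pressure
    vf = np.array([0.42, 0.47, 0.51, 0.55, 0.58, 0.61])  # fibre volume fraction

    b, log_a = np.polyfit(np.log(P), np.log(vf), 1)
    a = np.exp(log_a)
    print(f"v_f = {a:.3f} * P^{b:.3f}")
    # the same form relates permeability to fibre volume fraction, K = c * v_f^d
    ```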

  1. A 2D modeling approach for fluid propagation during FE-forming simulation of continuously reinforced composites in wet compression moulding

    NASA Astrophysics Data System (ADS)

    Poppe, Christian; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Wet compression moulding (WCM) provides large-scale production potential for continuously fiber-reinforced components as a promising alternative to resin transfer moulding (RTM). Lower cycle times are possible due to parallelization of the process steps draping, infiltration, and curing during moulding (viscous draping). Experimental and theoretical investigations indicate a strong mutual dependency between the physical mechanisms that occur during draping and mould filling (fluid-structure interaction). Thus, key process parameters, such as fiber orientation, fiber volume fraction, cavity pressure, and the amount and viscosity of the resin, are physically coupled. To enable time- and cost-efficient product and process development throughout all design stages, accurate process simulation tools are desirable. Separate draping and mould filling simulation models, as appropriate for the sequential RTM process, cannot be applied to the WCM process due to the above-outlined physical couplings. Within this study, a two-dimensional Darcy-Propagation-Element (DPE-2D) based on a finite element formulation with additional control volumes (FE/CV) is presented, verified, and applied to forming simulation of a generic geometry, as a first step towards a fluid-structure-interaction model taking into account simultaneous resin infiltration and draping. The model is implemented in the commercial FE solver Abaqus by means of several user subroutines considering simultaneous draping and 2D infiltration mechanisms. Darcy's equation is solved with respect to the local fiber orientation. Furthermore, the material model can access the local fluid domain properties to update the mechanical forming material parameters, which enables further investigation of the coupled physical mechanisms.

  2. Oxygen production System Models for Lunar ISRU

    NASA Technical Reports Server (NTRS)

    Santiago-Maldonado, Edgardo

    2007-01-01

    In-Situ Resource Utilization (ISRU) seeks to make human space exploration feasible by using available resources from a planet or the moon to produce consumables, parts, and structures that would otherwise be brought from Earth. Producing these in situ reduces the mass that must be launched, allowing more payload mass for each mission. The production of oxygen from lunar regolith, for life support and propellant, is one of the tasks being studied under ISRU. NASA is currently funding three processes that have shown technical merit for the production of oxygen from regolith: Molten Salt Electrolysis, Hydrogen Reduction of Ilmenite, and Carbothermal Reduction. The ISRU program is currently developing system models of the abovementioned processes to: (1) help NASA in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the oxygen production process, (4) provide estimates of the energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters, and (5) integrate into the overall end-to-end ISRU system model, which could in turn be integrated with mission architecture models. The oxygen production system model is divided into modules that represent unit operations (e.g., reactor, water electrolyzer, heat exchanger). Each module is modeled theoretically using Excel and Visual Basic for Applications (VBA), and will be validated using experimental data from ongoing laboratory work. This modular (plug-and-play) structure allows the same unit-operation model to be reused in simulations of different oxygen production systems, yielding comparable results. In this presentation, preliminary results for mass, power, and volume are presented along with a brief description of the oxygen production system model.

  3. Short-time-scale left ventricular systolic dynamics. Evidence for a common mechanism in both left ventricular chamber and heart muscle mechanics.

    PubMed

    Campbell, K B; Shroff, S G; Kirkpatrick, R D

    1991-06-01

    Based on the premise that short-time-scale, small-amplitude pressure/volume/outflow behavior of the left ventricular chamber was dominated by dynamic processes originating in cardiac myofilaments, a prototype model was built to predict pressure responses to volume perturbations. In the model, chamber pressure was taken to be the product of the number of generators in a pressure-bearing state and their average volumetric distortion, as in the muscle theory of A.F. Huxley, in which force was equal to the number of attached crossbridges and their average lineal distortion. Further, as in the muscle theory, pressure generators were assumed to cycle between two states, the pressure-bearing state and the non-pressure-bearing state. Experiments were performed in the isolated ferret heart, where variable volume decrements (0.01-0.12 ml) were removed at two commanded flow rates (flow clamps, -7 and -14 ml/sec). Pressure responses to volume removals were analyzed. Although the prototype model accounted for most features of the pressure responses, subtle but systematic discrepancies were observed. The presence or absence of flow and the magnitude of flow affected estimates of model parameters. However, estimates of parameters did not differ when the model was fitted to flow clamps with similar magnitudes of flows but different volume changes. Thus, prototype model inadequacies were attributed to misrepresentations of flow-related effects but not of volume-related effects. Based on these discrepancies, an improved model was built that added to the simple two-state cycling scheme, a pathway to a third state. This path was followed only in response to volume change. The improved model eliminated the deficiencies of the prototype model and was adequate in accounting for all observations. Since the template for the improved model was taken from the cycling crossbridge theory of muscle contraction, it was concluded that, in spite of the complexities of geometry, architecture, and regional heterogeneity of function and structure, crossbridge mechanisms dominated the short-time-scale dynamics of left ventricular chamber behavior.

  4. Melt Electrospinning Writing of Highly Ordered Large Volume Scaffold Architectures.

    PubMed

    Wunner, Felix M; Wille, Marie-Luise; Noonan, Thomas G; Bas, Onur; Dalton, Paul D; De-Juan-Pardo, Elena M; Hutmacher, Dietmar W

    2018-05-01

    The additive manufacturing of highly ordered, micrometer-scale scaffolds is at the forefront of tissue engineering and regenerative medicine research. The fabrication of scaffolds for the regeneration of larger tissue volumes, in particular, remains a major challenge. A technology at the convergence of additive manufacturing and electrospinning, melt electrospinning writing (MEW), is likewise limited in thickness/volume because excess charge accumulating in the deposited material repels incoming fibers and hence distorts scaffold architectures. The underlying physical principles that constrain MEW of thick, large-volume scaffolds are studied. Through computational modeling, numerical values for variable working distances are established that maintain the electrostatic force at a constant level during the printing process. Based on the computational simulations, three voltage profiles are applied to determine the maximum height (exceeding 7 mm) of a highly ordered large-volume scaffold. These thick MEW scaffolds have fully interconnected pores and allow cells to migrate and proliferate. To the best of the authors' knowledge, this is the first study to report that z-axis adjustment and increasing the voltage during the MEW process allow for the fabrication of high-volume scaffolds with uniform morphologies and fiber diameters. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Linking dynamics of transport timescale and variations of hypoxia in the Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Hong, Bo; Shen, Jian

    2013-11-01

    Dissolved oxygen (DO) replenishment in the bottom waters of an estuary depends on physical processes that are significantly influenced by external forcings. The vertical exchange time (VET) is introduced in this study to quantify the physical processes that regulate the DO replenishment in the Chesapeake Bay. A 3-D numerical model was applied to simulate the circulation, VET, and DO. Results indicate that VET is a suitable parameter for evaluating the bottom DO condition over both seasonal and interannual timescales. The VET is negatively correlated with the bottom DO. Hypoxia (DO <2 mg L-1) will develop in the Bay when VET is greater than 23 days in summer if mean total DO consumption rate is about 0.3 g O2 m-3 d-1. This critical VET value may vary around 23 days when the total DO consumption rate changes. The VET volume (volume of water mass with VET >23 days) can account for 77% of variations of hypoxic volume in the main Bay. The VET cannot explain all the DO variations as it can only account for the contribution of physical processes that regulate DO replenishment. It is found that the short-term vertical exchange process is highly controlled by the wind forcing. The VET volume decreases when the high-speed wind events are frequent. The summertime VET volume is less sensitive to short-term variations (pulses) of river discharge. It is sensitive to the total amount of river discharge and the high VET volume can be expected in the wet year.

  6. Modeling the Controlled Recrystallization of Particle-Containing Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Adam, Khaled; Root, Jameson M.; Long, Zhengdong; Field, David P.

    2017-01-01

    The recrystallized fraction for AA7050 during the solution heat treatment is highly dependent upon the history of deformation during thermomechanical processing. In this work, a state variable model was developed to predict the recrystallization volume fraction as a function of processing parameters. Particle stimulated nucleation (PSN) was observed as a dominant mechanism of recrystallization in AA7050. The mesoscale Monte Carlo Potts model was used to simulate the evolved microstructure during static recrystallization with the given recrystallization fraction determined already by the state variable model for AA7050 alloy. The spatial inhomogeneity of nucleation is obtained from the measurement of the actual second-phase particle distribution in the matrix identified using backscattered electron (BSE) imaging. The state variable model showed good fit with the experimental results, and the simulated microstructures were quantitatively comparable to the experimental results for the PSN recrystallized microstructure of 7050 aluminum alloy. It was also found that the volume fraction of recrystallization did not proceed as dictated by the Avrami equation in this alloy because of the presence of the growth inhibitors.

  7. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    PubMed

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, the establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to taking full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for the standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing, and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for the generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
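    The volume-weighted systematic random sampling by point-counting mentioned above rests on the Cavalieri estimator V = t · a_p · ΣP_i, with section spacing t, area per test point a_p, and P_i grid points hitting the tissue on section i. A minimal sketch with invented counts:

    ```python
    # Cavalieri volume estimate by point counting on systematic random sections.
    t = 0.5           # cm, distance between sections (assumed)
    a_p = 0.04        # cm^2, area associated with each test grid point (assumed)
    points_per_section = [12, 31, 48, 52, 41, 22, 9]   # counted hits (invented)

    volume = t * a_p * sum(points_per_section)
    print(f"estimated organ volume: {volume:.2f} cm^3")   # 4.30 cm^3
    ```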

  8. Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.

    A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free phase LNAPL recovery systems and to help in making cost-effective operation and management decisions for hydrogeologists, engineers, and regulators.

  9. Original predictive approach to the compressibility of pharmaceutical powder mixtures based on the Kawakita equation.

    PubMed

    Mazel, Vincent; Busignies, Virginie; Duca, Stéphane; Leclerc, Bernard; Tchoreloff, Pierre

    2011-05-30

    In the pharmaceutical industry, tablets are obtained by the compaction of two or more components which have different physical properties and compaction behaviours. Therefore, it could be interesting to predict the physical properties of the mixture using the single-component results. In this paper, we have focused on the prediction of the compressibility of binary mixtures using the Kawakita model. Microcrystalline cellulose (MCC) and L-alanine were compacted alone and mixed at different weight fractions. The volume reduction, as a function of the compaction pressure, was acquired during the compaction process ("in-die") and after elastic recovery ("out-of-die"). For the pure components, the Kawakita model is well suited to the description of the volume reduction. For binary mixtures, an original approach for the prediction of the volume reduction without using the effective Kawakita parameters was proposed and tested. The good agreement between experimental and predicted data proved that this model was efficient to predict the volume reduction of MCC and L-alanine mixtures during compaction experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
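    The Kawakita model C = abP/(1 + bP), with engineering strain C = (V0 − V)/V0, linearises to P/C = P/a + 1/(ab), so a and b follow from a straight-line fit in P. A sketch with invented data (not the MCC or L-alanine measurements):

    ```python
    import numpy as np

    # Kawakita fit in its linearised form P/C = P/a + 1/(a*b).
    P = np.array([25, 50, 100, 150, 200, 250.0])     # MPa, compaction pressure
    C = np.array([0.30, 0.38, 0.44, 0.47, 0.48, 0.49])  # engineering strain

    slope, intercept = np.polyfit(P, P / C, 1)
    a = 1.0 / slope            # maximal engineering strain
    b = slope / intercept      # 1/MPa; 1/b is the pressure at C = a/2
    print(f"a = {a:.3f}, b = {b:.4f} 1/MPa")
    ```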

  10. Cerebellar contribution to motor and cognitive performance in multiple sclerosis: An MRI sub-regional volumetric analysis.

    PubMed

    D'Ambrosio, Alessandro; Pagani, Elisabetta; Riccitelli, Gianna C; Colombo, Bruno; Rodegher, Mariaemma; Falini, Andrea; Comi, Giancarlo; Filippi, Massimo; Rocca, Maria A

    2017-08-01

    To investigate the role of cerebellar sub-regions on motor and cognitive performance in multiple sclerosis (MS) patients. Whole and sub-regional cerebellar volumes, brain volumes, T2 hyperintense lesion volumes (LV), and motor performance scores were obtained from 95 relapse-onset MS patients and 32 healthy controls (HC). MS patients also underwent an evaluation of working memory and processing speed functions. Cerebellar anterior and posterior lobes were segmented using the Spatially Unbiased Infratentorial Toolbox (SUIT) from Statistical Parametric Mapping (SPM12). Multivariate linear regression models assessed the relationship between magnetic resonance imaging (MRI) measures and motor/cognitive scores. Compared to HC, only secondary progressive multiple sclerosis (SPMS) patients had lower cerebellar volumes (total and posterior cerebellum). In MS patients, lower anterior cerebellar volume and brain T2 LV predicted worse motor performance, whereas lower posterior cerebellar volume and brain T2 LV predicted poor cognitive performance. Global measures of brain volume and infratentorial T2 LV were not selected by the final multivariate models. Cerebellar volumetric abnormalities are likely to play an important contribution to explain motor and cognitive performance in MS patients. Consistently with functional mapping studies, cerebellar posterior-inferior volume accounted for variance in cognitive measures, whereas anterior cerebellar volume accounted for variance in motor performance, supporting the assessment of cerebellar damage at sub-regional level.

  11. Design of Linear-Quadratic-Regulator for a CSTR process

    NASA Astrophysics Data System (ADS)

    Meghna, P. R.; Saranya, V.; Jaganatha Pandian, B.

    2017-11-01

    This paper aims at creating a Linear Quadratic Regulator (LQR) for a Continuous Stirred Tank Reactor (CSTR). A CSTR is a common process used in the chemical industry. It is a highly non-linear system; therefore, in order to create the feedback gain controller, the model is linearized. The controller is designed for the linearized model, and the concentration and volume of the liquid in the reactor are kept at the required constant values.
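    For a linearised plant, the LQR gain follows from the continuous-time algebraic Riccati equation as K = R^-1 B^T P. A minimal sketch with an assumed two-state plant and weights (illustrative matrices, not the paper's CSTR model):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # LQR for a linearised 2-state system (states: concentration and volume
    # deviations from the operating point; A, B, Q, R are assumptions).
    A = np.array([[-2.0, 0.5],
                  [ 0.0, -1.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # penalise concentration error most
    R = np.array([[0.1]])

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -K x
    print("gain K:", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```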

  12. Numerical analysis of the formation process of aerosols in the alveoli

    NASA Astrophysics Data System (ADS)

    Haslbeck, Karsten; Seume, Jörg R.

    2008-11-01

    For a successful diagnosis of lung diseases through analysis of non-volatile molecules in exhaled breath, an exact understanding of the aerosol formation process is required. This process is modeled using Computational Fluid Dynamics (CFD). The model captures the interaction at the boundary surface between the airflow in the airway and the local epithelial liquid layer. A 2-D volume mesh of an alveolus is generated, taking into account the connection of the alveolus with the sacculi alveolares (SA). The Volume of Fluid (VOF) method is used to model the interface between the gas and the liquid film. The non-Newtonian flow is modeled by implementing the Ostwald-de Waele model. Surface tension is a function of the surfactant concentration. The VOF method allows the concentration distribution of the epithelial liquid layer at the surface to be traced in a transient manner. The simulations show the rupturing of the liquid film through drop formation. Aerosol particles are ejected into the SA and do not collide with the walls. The quantity, the geometrical size, and the velocity distributions of the generated aerosols are determined. The data presented in the paper provide the boundary conditions for future CFD analysis of aerosol transport through the airways up to exhalation.

  13. Integrated firn elevation change model for glaciers and ice caps

    NASA Astrophysics Data System (ADS)

    Saß, Björn; Sauter, Tobias; Braun, Matthias

    2016-04-01

    We present the development of a firn compaction model intended to improve the volume-to-mass conversion of geodetic glacier mass balance measurements. The model is applied to the Arctic ice cap Vestfonna. Vestfonna is located on the island of Nordaustlandet in the northeast of Svalbard; it covers about 2400 km² and has a dome-like shape with well-defined outlet glaciers. Elevation and volume changes measured by, e.g., satellite techniques are becoming increasingly popular. They are carried out over observation periods of variable length, often covering different meteorological and snow-hydrological regimes. The elevation change measurements comprise various components, including dynamic adjustments, firn compaction, and mass loss by downwasting. Currently, geodetic glacier mass balances are frequently converted from elevation change measurements using a constant conversion factor of 850 kg m-³ or the density of ice (917 kg m-³) for entire glacier basins. However, the natural conditions are rarely that static. Other studies have used constant densities for the ablation (900 kg m-³) and accumulation (600 kg m-³) areas, whereby density variations under varying meteorological and climate conditions are not considered. Hence, each approach bears additional uncertainties from the volume-to-mass conversion that are strongly affected by the type and timing of the repeat measurements. We link and adapt existing models of surface energy balance, accumulation, and snow and firn processes in order to improve the volume-to-mass conversion by considering the firn compaction component. Energy exchange at the surface is computed by a surface energy balance approach and driven by meteorological variables such as incoming short-wave radiation, air temperature, relative humidity, air pressure, wind speed, all-phase precipitation, and cloud cover fraction. Snow and firn processes are addressed by a coupled subsurface model, implemented with a non-equidistant layer discretisation. On our poster we present an overview of the model structure and the input data (model forcing) and, finally, an exemplary test case with basic approaches to validation.

  14. Coupling volume-excluding compartment-based models of diffusion at different scales: Voronoi and pseudo-compartment approaches

    PubMed Central

    Taylor, P. R.; Baker, R. E.; Simpson, M. J.; Yates, C. A.

    2016-01-01

    Numerous processes across both the physical and biological sciences are driven by diffusion. Partial differential equations are a popular tool for modelling such phenomena deterministically, but it is often necessary to use stochastic models to accurately capture the behaviour of a system, especially when the number of diffusing particles is low. The stochastic models we consider in this paper are ‘compartment-based’: the domain is discretized into compartments, and particles can jump between these compartments. Volume-excluding effects (crowding) can be incorporated by blocking movement with some probability. Recent work has established the connection between fine- and coarse-grained models incorporating volume exclusion, but only for uniform lattices. In this paper, we consider non-uniform, hybrid lattices that incorporate both fine- and coarse-grained regions, and present two different approaches to describe the interface of the regions. We test both techniques in a range of scenarios to establish their accuracy, benchmarking against fine-grained models, and show that the hybrid models developed in this paper can be significantly faster to simulate than the fine-grained models in certain situations and are at least as fast otherwise. PMID:27383421
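    The compartment-based, volume-excluding dynamics the paper builds on can be sketched on a single uniform lattice: particles jump between compartments, and a jump is blocked with a probability that grows with the occupancy of the target compartment. The toy below shows only that uniform-lattice baseline (with a periodic boundary and invented parameters); the paper's hybrid fine/coarse coupling is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K = 50                   # compartments (periodic domain)
    cap = 20                 # carrying capacity per compartment (crowding)
    n = np.zeros(K, int)
    n[:5] = cap              # initial condition: left end fully packed
    rate, dt = 1.0, 0.01     # per-particle jump rate and time step

    for _ in range(20_000):
        for direction in (-1, +1):
            jumps = rng.binomial(n, rate * dt / 2)        # attempted jumps
            dest = np.roll(n, -direction)                 # target occupancies
            # volume exclusion: accept with probability 1 - occupancy/capacity
            accepted = rng.binomial(jumps, np.clip(1 - dest / cap, 0, 1))
            n -= accepted
            n += np.roll(accepted, direction)
    print(n.sum(), n[:12])   # total conserved at 100; mass has spread out
    ```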

  15. Travaux Neuchatelois de Linguistique (TRANEL) (Neuchatel Working Papers in Linguistics), Volume 14.

    ERIC Educational Resources Information Center

    Py, Bernard, Ed.; Rubattel, Christian, Ed.

    1989-01-01

    Three papers in linguistics, all in French, are presented. "La delocutivite lexicale en francais standard: esquisse d'un modele derivationnel" ("Lexical Delocutivity in Standard French: Sketch of a Derivational Model"), by Marc Bonhomme, examines the process by which certain expressions become neologisms. "La terminologie…

  16. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas

    NASA Astrophysics Data System (ADS)

    Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.

    2017-11-01

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While automated segmentation tools speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures, and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high-throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated, in addition to observed differences in electric field-volume histograms. Furthermore, the MPHTXT and NASTRAN mesh file formats were compared using the differences in the electric field-volume histogram. The Dice coefficient was lower for auto-segmentation without post-processing than with it, indicating that post-processing converges on the manually corrected model. A marginal relative difference between the electric field maps from models with and without manual correction was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
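    The Dice coefficient used here for comparing segmentations is 2|A∩B|/(|A|+|B|). A minimal sketch on toy masks (the overlapping spheres merely stand in for automated versus manually corrected segmentations):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice coefficient between two boolean masks: 2|A & B| / (|A| + |B|)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # toy check: two slightly offset spheres in a 64^3 volume
    z, y, x = np.ogrid[:64, :64, :64]
    auto = (z - 32)**2 + (y - 32)**2 + (x - 32)**2 < 15**2
    corrected = (z - 32)**2 + (y - 30)**2 + (x - 30)**2 < 15**2
    print(f"Dice = {dice(auto, corrected):.3f}")
    ```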

  17. Influence of temperature on the single-stage ATAD process predicted by a thermal equilibrium model.

    PubMed

    Cheng, Jiehong; Zhu, Jun; Kong, Feng; Zhang, Chunyong

    2015-06-01

    Autothermal thermophilic aerobic digestion (ATAD) is a promising biological process that produces an effluent satisfying the Class A requirements on pathogen control and land application. The thermophilic temperature in an ATAD reactor is one of the critical factors that can affect satisfactory operation of the ATAD process. This paper establishes a thermal equilibrium model to predict the effect of variables on the auto-rising temperature in an ATAD system. Reactors with volumes smaller than 10 m(3) could not achieve temperatures higher than 45 °C at an ambient temperature of -5 °C. The results showed that for small reactors, the reactor volume played a key role in promoting the auto-rising temperature in winter. The thermophilic temperature achieved in small ATAD reactors did not depend entirely on the heat released by biological activity during degradation of organic matter in the sludge, but was also related to the ambient temperature. Surface-area-to-effective-volume ratios below 2.0 had less impact on the auto-rising temperature of an ATAD reactor. The influence of ambient temperature on the auto-rising reactor temperature decreased with increasing reactor volume. High oxygen transfer efficiency had a significant influence on the internal temperature rise in an ATAD system, indicating that improving the oxygen transfer efficiency of aeration devices was a key factor in achieving a higher removal rate of volatile solids (VS) during ATAD process operation. Compared with aeration using cold air, hot air demonstrated a significant effect on maintaining the internal temperature (usually 4-5 °C higher). Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. A NASTRAN model of a large flexible swing-wing bomber. Volume 5: NASTRAN model development-fairing structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the fairing structure was expanded in detail to generate the NASTRAN model of this substructure. The grid point coordinates, element definitions, material properties, and sizing data for each element were specified. The fairing model was thoroughly checked out for continuity, connectivity, and constraints. The substructure was processed for structural influence coefficients (SIC) point loadings to determine the deflection characteristics of the fairing model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  19. Macroscopic balance model for wave rotors

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    1996-01-01

    A mathematical model for multi-port wave rotors is described. The wave processes that effect energy exchange within the rotor passage are modeled using one-dimensional gas dynamics. Macroscopic mass and energy balances relate volume-averaged thermodynamic properties in the rotor passage control volume to the mass, momentum, and energy fluxes at the ports. Loss models account for entropy production in boundary layers and in separating flows caused by blade-blockage, incidence, and gradual opening and closing of rotor passages. The mathematical model provides a basis for predicting design-point wave rotor performance, port timing, and machine size. Model predictions are evaluated through comparisons with CFD calculations and three-port wave rotor experimental data. A four-port wave rotor design example is provided to demonstrate model applicability. The modeling approach is amenable to wave rotor optimization studies and rapid assessment of the trade-offs associated with integrating wave rotors into gas turbine engine systems.

  20. A model for the influences of soluble and insoluble solids, and treated volume on the ultraviolet-C resistance of heat-stressed Salmonella enterica in simulated fruit juices.

    PubMed

    Estilo, Emil Emmanuel C; Gabriel, Alonzo A

    2018-02-01

    This study was conducted to determine the effects of the intrinsic juice characteristics insoluble solids (IS, 0-3% w/v) and soluble solids (SS, 0-70 °Brix), and the extrinsic process parameter treated volume (250-1000 mL), on the UV-C inactivation rates of heat-stressed Salmonella enterica in simulated fruit juices (SFJs). A Rotatable Central Composite Design of Experiment (CCRD) was used to determine combinations of the test variables, while Response Surface Methodology (RSM) was used to characterize and quantify the influences of the test variables on microbial inactivation. The heat-stressed cells exhibited log-linear UV-C inactivation behavior (R^2 = 0.952 to 0.999) in all CCRD combinations, with D(UV-C) values ranging from 10.0 to 80.2 mJ/cm^2. The D(UV-C) values obtained from the CCRD fitted a quadratic model significantly (P < 0.0001). RSM results showed that the individual linear terms (IS, SS, volume), individual quadratic terms (IS^2 and volume^2), and factor interactions (IS × volume and SS × volume) significantly influenced UV-C inactivation. Validation of the model in SFJs with combinations not included in the CCRD showed that the predictions were within acceptable error margins. Copyright © 2017. Published by Elsevier Ltd.

  1. Structural mass irregularities and fiber volume influence on morphology and mechanical properties of unsaturated polyester resin in matrix composites

    PubMed Central

    Ahmed, Khalil; Nasir, Muhammad; Fatima, Nasreen; Khan, Khalid M.; Zahra, Durey N.

    2014-01-01

    This paper presents the comparative results of a current study on unsaturated polyester resin (UPR) matrix composites processed by the filament winding method, with cotton spun yarn of different mass irregularities and two different volume fractions. Physical and mechanical properties were measured, namely ultimate stress, stiffness, and percent elongation. The mechanical properties of the composites increased significantly with increasing fiber volume fraction, in agreement with the Counto model. Mass irregularities in the yarn structure were quantitatively measured and visualized by scanning electron microscopy (SEM). Mass irregularities cause a marked decrease in relative strength, of about 25% and 33%, which increases with fiber volume fraction. Ultimate stress and stiffness increase with fiber volume fraction and are always higher for yarn with fewer mass irregularities. PMID:26644920

  2. Atmospheric and Space Sciences: Ionospheres and Plasma Environments

    NASA Astrophysics Data System (ADS)

    Yiǧit, Erdal

    2018-01-01

    The SpringerBriefs on Atmospheric and Space Sciences, in two volumes, present a concise and interdisciplinary introduction to the basic theory, observation, and modeling of atmospheric and ionospheric coupling processes on Earth. The goal is to contribute toward bridging the gap between meteorology, aeronomy, and planetary science. In addition, recent progress in several related research topics, such as atmospheric wave coupling and variability, is discussed. Volume 1 focuses on the atmosphere, while Volume 2 presents the ionospheres and the plasma environments. Volume 2 is aimed primarily at (research) students and young researchers who would like to gain quick insight into the basics of space sciences and current research. In combination with the first volume, it is also a useful tool for professors who would like to develop a course in atmospheric and space physics.

  3. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    The ever-increasing pace of growth in computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data, rather than the data itself, to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.

  4. Shrinking lung syndrome as a manifestation of pleuritis: a new model based on pulmonary physiological studies.

    PubMed

    Henderson, Lauren A; Loring, Stephen H; Gill, Ritu R; Liao, Katherine P; Ishizawar, Rumey; Kim, Susan; Perlmutter-Goldenson, Robin; Rothman, Deborah; Son, Mary Beth F; Stoll, Matthew L; Zemel, Lawrence S; Sandborg, Christy; Dellaripa, Paul F; Nigrovic, Peter A

    2013-03-01

    The pathophysiology of shrinking lung syndrome (SLS) is poorly understood. We sought to define the structural basis for this condition through the study of pulmonary mechanics in affected patients. Since 2007, most patients evaluated for SLS at our institutions have undergone standardized respiratory testing including esophageal manometry. We analyzed these studies to define the physiological abnormalities driving respiratory restriction. Chest computed tomography data were post-processed to quantify lung volume and parenchymal density. Six cases met criteria for SLS. All presented with dyspnea as well as pleurisy and/or transient pleural effusions. Chest imaging results were free of parenchymal disease and corrected diffusing capacities were normal. Total lung capacities were 39%-50% of predicted. Maximal inspiratory pressures were impaired at high lung volumes, but not low lung volumes, in 5 patients. Lung compliance was strikingly reduced in all patients, accompanied by increased parenchymal density. Patients with SLS exhibited symptomatic and/or radiographic pleuritis associated with 2 characteristic physiological abnormalities: (1) impaired respiratory force at high but not low lung volumes; and (2) markedly decreased pulmonary compliance in the absence of identifiable interstitial lung disease. These findings suggest a model in which pleural inflammation chronically impairs deep inspiration, for example through neural reflexes, leading to parenchymal reorganization that impairs lung compliance, a known complication of persistently low lung volumes. Together these processes could account for the association of SLS with pleuritis as well as the gradual symptomatic and functional progression that is a hallmark of this syndrome.

  5. Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition

    ERIC Educational Resources Information Center

    Rogers, Timothy T.; McClelland, James L.

    2014-01-01

    This paper introduces a special issue of "Cognitive Science" initiated on the 25th anniversary of the publication of "Parallel Distributed Processing" (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP…

  6. Toward a multifactorial model of Alzheimer disease

    PubMed Central

    Storandt, Martha; Head, Denise; Fagan, Anne M.; Holtzman, David M.; Morris, John C.

    2011-01-01

    Relations among antecedent biomarkers of AD were evaluated using causal modeling; although correlation cannot be equated to causation, causation does require correlation. Individuals aged 43 to 89 years (N = 220) enrolled as cognitively normal controls in longitudinal studies had clinical and psychometric assessment, structural magnetic resonance imaging (MRI), cerebrospinal fluid (CSF) biomarkers, and brain amyloid imaging via positron emission tomography with Pittsburgh Compound B (PIB) obtained within 1 year. CSF levels of Aβ42 and tau were minimally correlated, indicating they represent independent processes. Aβ42, tau, and their interaction explained 60% of the variance in PIB. Effects of APOE genotype and age on PIB were indirect, operating through CSF markers. Only spurious relations via their common relation with age were found between the biomarkers and regional brain volumes or cognition. Hence, at least two independent hypothesized processes, one reflected by CSF Aβ42 and one by CSF tau, contribute to the development of fibrillar amyloid plaques preclinically. The lack of correlation between these two processes and brain volume in the regions most often affected in AD suggests the operation of a third process related to brain atrophy. PMID:22261556

  7. Denaturation process of laccase in various media by refractive index measurements.

    PubMed

    Saoudi, O; Ghaouar, N; Ben Salah, S; Othman, T

    2017-09-01

    In this work, we are interested in the denaturation process of a laccase from Trametes versicolor via determination of the refractive index, the refractive index increment, and the specific volume in various media. The measurements were carried out using an Abbe refractometer. We have shown that the refractive index increment values obtained from the slope of the variation of the refractive index vs. concentration lie outside the typical range of refractive index increments of proteins. To correct the results, we followed theoretical predictions based on knowledge of the protein refractive index from its amino acid composition. The denaturation process was studied by calculating the specific volume variation, whose determination was based on the Gladstone-Dale and Lorentz-Lorenz models.
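
    For reference, the two specific-refraction relations named above are commonly written as follows (standard textbook forms relating the refractive index n and the specific volume; the notation here is ours, not the paper's):

      % Gladstone-Dale and Lorentz-Lorenz specific refractions
      r_{\mathrm{GD}} = (n - 1)\,\bar{v},
      \qquad
      r_{\mathrm{LL}} = \frac{n^{2} - 1}{n^{2} + 2}\,\bar{v}

    Inverting either relation for the specific volume from the measured refractive index is what allows the specific-volume variation, and hence the progress of denaturation, to be tracked.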

  8. Mathematical modeling of the integrated process of mercury bioremediation in the industrial bioreactor.

    PubMed

    Głuszcz, Paweł; Petera, Jerzy; Ledakowicz, Stanisław

    2011-03-01

    The mathematical model of the integrated process of mercury contaminated wastewater bioremediation in a fixed-bed industrial bioreactor is presented. An activated carbon packing in the bioreactor plays the role of an adsorbent for ionic mercury and at the same time of a carrier material for immobilization of mercury-reducing bacteria. The model includes three basic stages of the bioremediation process: mass transfer in the liquid phase, adsorption of mercury onto activated carbon and ionic mercury bioreduction to Hg(0) by immobilized microorganisms. Model calculations were verified using experimental data obtained during the process of industrial wastewater bioremediation in the bioreactor of 1 m³ volume. It was found that the presented model reflects the properties of the real system quite well. Numerical simulation of the bioremediation process confirmed the experimentally observed positive effect of the integration of ionic mercury adsorption and bioreduction in one apparatus.
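
    The three stages of the model map naturally onto a small ODE system. The sketch below assumes, since the abstract does not give the rate laws, linear film mass transfer, a linear-driving-force approach to Langmuir adsorption equilibrium, and Monod-type bioreduction, with made-up parameter values:

      from scipy.integrate import solve_ivp

      kLa, k_ads, qmax, K_L, vmax, Km = 5.0, 0.1, 2.0, 0.5, 1.0, 0.2  # hypothetical values

      def rhs(t, y):
          c_bulk, c_surf, q = y                         # bulk Hg, near-wall Hg, adsorbed Hg
          transfer = kLa * (c_bulk - c_surf)            # stage 1: liquid-phase mass transfer
          adsorb = k_ads * (qmax * c_surf / (K_L + c_surf) - q)  # stage 2: toward Langmuir equilibrium
          bio = vmax * c_surf / (Km + c_surf)           # stage 3: Monod bioreduction to Hg(0)
          return [-transfer, transfer - adsorb - bio, adsorb]

      sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0])
      print("bulk Hg remaining after 10 h:", round(float(sol.y[0, -1]), 4))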

  9. Cognitive correlates of white matter lesion load and brain atrophy

    PubMed Central

    Dong, Chuanhui; Nabizadeh, Nooshin; Caunca, Michelle; Cheung, Ying Kuen; Rundek, Tatjana; Elkind, Mitchell S.V.; DeCarli, Charles; Sacco, Ralph L.; Stern, Yaakov

    2015-01-01

    Objective: We investigated white matter lesion load and global and regional brain volumes in relation to domain-specific cognitive performance in the stroke-free Northern Manhattan Study (NOMAS) population. Methods: We quantified white matter hyperintensity volume (WMHV), total cerebral volume (TCV), and total lateral ventricular (TLV) volume, as well as hippocampal and cortical gray matter (GM) lobar volumes in a subgroup. We used general linear models to examine MRI markers in relation to domain-specific cognitive performance, adjusting for key covariates. Results: MRI and cognitive data were available for 1,163 participants (mean age 70 ± 9 years; 60% women; 66% Hispanic, 17% black, 15% white). Across the entire sample, those with greater WMHV had worse processing speed. Those with larger TLV did worse on episodic memory, processing speed, and semantic memory tasks, and TCV did not explain domain-specific variability in cognitive performance independent of other measures. Age was an effect modifier, and stratified analysis showed that TCV and WMHV explained variability in some domains above age 70. Smaller hippocampal volume was associated with worse performance across domains, even after adjusting for APOE ε4 and vascular risk factors, whereas smaller frontal lobe volumes were only associated with worse executive function. Conclusions: In this racially/ethnically diverse, community-based sample, white matter lesion load was inversely associated with cognitive performance, independent of brain atrophy. Lateral ventricular, hippocampal, and lobar GM volumes explained domain-specific variability in cognitive performance. PMID:26156514

  10. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.

  11. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that recharge estimates will be sensitive not only to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volume and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model forced with rainfall and evaporation data from the NIWA Virtual Climate Station Network (VCSN) data (considered the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degree and 0.5 degree resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model constructed using the same base data and forced with the VCSN precipitation dataset. Comparison of the rainfall products shows that there are significant differences in precipitation volume between the forcing products, in the order of 20% at most points. Even more significant differences can be seen, however, in the distribution of precipitation. For the VCSN data, wet days (defined as >0.1 mm precipitation) occur on some 20-30% of days (depending on location). This is reasonably reflected in the TRMM and CHIRPS data, while for the reanalysis-based products some 60% to 80% of days are wet, albeit at lower intensities. These differences are amplified in the recharge estimates. At most points, volumetric differences are in the order of 40-60%, though differences may range into several orders of magnitude. The frequency distributions of recharge also differ significantly, with recharge over 0.1 mm occurring on 4-6% of days for the VCSN, CHIRPS, and TRMM datasets, but up to the order of 12% of days for the reanalysis data. Comparison against the lysimeter data shows estimates to be reasonable, in particular for the reference datasets. Surprisingly, some estimates from the lower-resolution reanalysis datasets are also reasonable, though this does seem to be due to lower recharge being compensated by recharge occurring more frequently. These results underline the importance of correctly representing rainfall volumes, as well as distribution, particularly when evaluating possible changes in, for example, precipitation intensity and volume. This holds for precipitation data derived from satellite-based and reanalysis products, but also for interpolated data from gauges, where the distribution of intensities is strongly influenced by the interpolation process.
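
    A minimal single-bucket sketch of the kind of water-balance recharge model described above (thresholds and parameters invented), illustrating why two forcings with a similar volume but different intensity distributions yield different recharge:

      import numpy as np

      def recharge(rain, pet, capacity=80.0, runoff_thresh=25.0):
          """Daily rain and PET series in mm; returns daily recharge in mm."""
          store, out = 0.0, []
          for p, e in zip(rain, pet):
              runoff = max(0.0, p - runoff_thresh)   # intense rain exceeds infiltration
              store += p - runoff                    # the rest infiltrates
              store = max(0.0, store - e)            # evaporative loss
              drainage = max(0.0, store - capacity)  # excess percolates to groundwater
              store -= drainage
              out.append(drainage)
          return np.array(out)

      rng = np.random.default_rng(1)
      intense = rng.binomial(1, 0.25, 365) * rng.exponential(12.0, 365)  # few wet, heavy days
      drizzly = rng.binomial(1, 0.75, 365) * rng.exponential(4.0, 365)   # many wet, light days
      pet = np.full(365, 2.0)
      print("intense:", round(recharge(intense, pet).sum(), 1),
            "drizzly:", round(recharge(drizzly, pet).sum(), 1))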

  12. The Evaluation of Screening Process and Local Bureaucracy in Determining the Priority of Urban Roads Maintenance and Rehabilitation

    NASA Astrophysics Data System (ADS)

    Hendhratmoyo, Andri; Syafi'i; Pungky Pramesti, Florentina

    2017-11-01

    Because the budget for urban road maintenance and rehabilitation is limited, prioritization is inevitable, and many models have been developed to address this problem. The purpose of this study was therefore to evaluate the screening process in decision making for urban road maintenance and rehabilitation priorities. The prioritization takes into account four criteria: road condition, traffic volume, budget processing, and land use. Thirty stakeholders were asked to fill in questionnaires, and the case study examined 188 urban road sections in Ponorogo. The researchers collected Surface Distress Index (SDI), traffic volume, budget processing, and land use data for these road sections. Based on the analysis, the weights of the criteria were: road condition (W1) = 0.411; traffic volume (W2) = 0.122; budget processing (W3) = 0.363; and land use (W4) = 0.105. Comparing the priority index values of the alternatives revealed that Nyi Ageng Serang Street has the highest priority for maintenance and rehabilitation activities.
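
    With the reported weights, the priority index reduces to a weighted sum over normalized criterion scores. The road names and scores below are invented for illustration:

      weights = {"road_condition": 0.411, "traffic_volume": 0.122,
                 "budget": 0.363, "land_use": 0.105}

      # hypothetical criterion scores, normalized to [0, 1]
      roads = {
          "Road A": {"road_condition": 0.9, "traffic_volume": 0.6, "budget": 0.7, "land_use": 0.4},
          "Road B": {"road_condition": 0.5, "traffic_volume": 0.9, "budget": 0.4, "land_use": 0.8},
      }

      priority = {name: sum(weights[c] * s[c] for c in weights) for name, s in roads.items()}
      for name, idx in sorted(priority.items(), key=lambda kv: -kv[1]):
          print(f"{name}: priority index {idx:.3f}")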

  13. On the new metrics for IMRT QA verification.

    PubMed

    Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo

    2016-11-01

    The aim of this work is to search for new metrics that could give more reliable acceptance/rejection criteria in the IMRT verification process and to offer solutions to the discrepancies found among different conventional metrics. Therefore, besides conventional metrics, new ones are proposed and evaluated with new tools to find correlations among them. These new metrics are based on processing the dose-volume histogram information, evaluating absorbed dose differences, dose constraint fulfillment, or modified biomathematical treatment outcome models such as tumor control probability (TCP) and normal tissue complication probability (NTCP). An additional purpose is to establish whether the new metrics yield the same acceptance/rejection plan distribution as the conventional ones. Fifty-eight treatment plans covering several treatment sites were analyzed. All of them were verified prior to treatment using conventional metrics, and retrospectively after treatment with the new metrics. These new metrics include the definition of three continuous functions, based on dose-volume histograms resulting from measurements evaluated with a reconstructed dose system and also with a redundant Monte Carlo calculation. The 3D gamma function for every volume of interest is also calculated. The information is also processed to obtain ΔTCP or ΔNTCP for the considered volumes of interest. These biomathematical treatment outcome models have been modified to increase their sensitivity to dose changes. A robustness index is defined from a radiobiological point of view to classify plans by their robustness against dose changes. Dose difference metrics can be condensed into a single parameter, the dose difference global function, with an optimal cutoff that can be determined from a receiver operating characteristic (ROC) analysis of the metric. It is not always possible to correlate differences in biomathematical treatment outcome models with dose difference metrics. This is because the dose constraint is often far from the dose that has an actual impact on the radiobiological model, and therefore biomathematical treatment outcome models are insensitive to large dose differences between the verification system and the treatment planning system. As an alternative, the use of modified radiobiological models that provide a better correlation is proposed. In any case, it is better to choose plans that are robust from a radiobiological point of view. The robustness index defined in this work is a good predictor of the plan rejection probability according to metrics derived from modified radiobiological models. The global 3D gamma-based metric calculated for each plan volume shows a good correlation with the dose difference metrics and performs well in the acceptance/rejection process. Some discrepancies were found in dose reconstruction depending on the algorithm employed. Significant and unavoidable discrepancies were found between the conventional metrics and the new ones. The dose difference global function and the 3D gamma for each plan volume are good classifiers with respect to dose difference metrics. ROC analysis is useful to evaluate the predictive power of the new metrics. The correlation between biomathematical treatment outcome models and dose difference-based metrics is enhanced by using modified TCP and NTCP functions that take into account the dose constraints for each plan. The robustness index is useful for evaluating whether a plan is likely to be rejected. Conventional verification should be replaced by the new metrics, which are clinically more relevant.
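
    For illustration, a minimal 1-D gamma-index comparison of a reference and an evaluated dose profile (the paper computes 3D gamma per volume of interest; the 3%/3 mm criteria and the profiles here are invented):

      import numpy as np

      def gamma_1d(dose_ref, dose_eval, dx=1.0, dd=0.03, dta=3.0):
          """Per-point gamma for two equally spaced 1-D dose profiles (mm, fraction of max)."""
          n = len(dose_ref)
          x = np.arange(n) * dx
          norm = dose_ref.max()
          gammas = np.empty(n)
          for i in range(n):
              # capital-Gamma surface over all comparison points; gamma is its minimum
              ddose = (dose_eval - dose_ref[i]) / (dd * norm)
              dist = (x - x[i]) / dta
              gammas[i] = np.sqrt(ddose**2 + dist**2).min()
          return gammas

      ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)          # Gaussian reference profile
      ev = np.exp(-((np.arange(100) - 51) / 15.0) ** 2) * 1.01    # shifted, rescaled copy
      g = gamma_1d(ref, ev)
      print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1):.1%}")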

  14. Comparison between cylindrical and prismatic lithium-ion cell costs using a process based cost model

    NASA Astrophysics Data System (ADS)

    Ciez, Rebecca E.; Whitacre, J. F.

    2017-02-01

    The relative size and age of the US electric vehicle market mean that a few vehicles are able to drive market-wide trends in the battery chemistries and cell formats on the road today. Three lithium-ion chemistries account for nearly all of the storage capacity, and half of the cells are cylindrical. However, no specific model exists to examine the costs of manufacturing these cylindrical cells. Here we present a process-based cost model tailored to the cylindrical lithium-ion cells currently used in the EV market. We examine the costs for varied cell dimensions, electrode thicknesses, chemistries, and production volumes. Although cost savings are possible from increasing cell dimensions and electrode thicknesses, economies of scale have already been reached, and future cost reductions from increased production volumes are minimal. Prismatic cells, which are able to further capitalize on the cost reduction from larger formats, can offer reductions beyond those possible for cylindrical cells.

  15. The integrated simulation and assessment of the impacts of process change in biotherapeutic antibody production.

    PubMed

    Chhatre, Sunil; Jones, Carl; Francis, Richard; O'Donovan, Kieran; Titchener-Hooker, Nigel; Newcombe, Anthony; Keshavarz-Moore, Eli

    2006-01-01

    Growing commercial pressures in the pharmaceutical industry are establishing a need for robust computer simulations of whole bioprocesses to allow rapid prediction of the effects of changes made to manufacturing operations. This paper presents an integrated process simulation that models the cGMP manufacture of the FDA-approved biotherapeutic CroFab, an IgG fragment used to treat rattlesnake envenomation (Protherics U.K. Limited, Blaenwaun, Ffostrasol, Llandysul, Wales, U.K.). Initially, the product is isolated from ovine serum by precipitation and centrifugation, before enzymatic digestion of the IgG to produce FAB and FC fragments. These are purified by ion exchange and affinity chromatography to remove the FC and non-specific FAB fragments from the final venom-specific FAB product. The model was constructed in a discrete event simulation environment and used to determine the potential impact of a series of changes to the process, such as increasing the step efficiencies or volumes of chromatographic matrices, upon product yields and process times. The study indicated that the overall FAB yield was particularly sensitive to changes in the digestive and affinity chromatographic step efficiencies, which have a predicted 30% greater impact on process FAB yield than do the precipitation or centrifugation stages. The study showed that increasing the volume of affinity matrix has a negligible impact upon total process time. Although results such as these would require experimental verification within the physical constraints of the process and the facility, the model predictions are still useful in allowing rapid "what-if" scenario analysis of the likely impacts of process changes within such an integrated production process.
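
    At its simplest, the "what-if" analysis described above amounts to propagating per-step efficiencies through the process chain and perturbing one step at a time. The step names and efficiencies below are illustrative, not CroFab process data:

      steps = {"precipitation": 0.95, "centrifugation": 0.97,
               "digestion": 0.80, "ion_exchange": 0.90, "affinity": 0.85}

      def overall(effs):
          # overall yield is the product of the per-step efficiencies
          y = 1.0
          for e in effs.values():
              y *= e
          return y

      base = overall(steps)
      for name in steps:
          # bump one step efficiency by 5 points and see the effect on total yield
          bumped = dict(steps, **{name: min(1.0, steps[name] + 0.05)})
          print(f"+5% on {name:15s}: yield {base:.3f} -> {overall(bumped):.3f}")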

  16. Fuel Cell Manufacturing Research and Development | Hydrogen and Fuel Cells

    Science.gov Websites

    Methods to meet volume and cost targets for transportation and other applications. Develop predictive models to help industry design better manufacturing processes and methods.

  17. Morphological phenotyping of mouse hearts using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lin, Eric; Lee, Ling; Sheng, Xiaoye; Wong, Kevin S. K.; Tibbits, Glen F.; Beg, Mirza Faisal; Sarunic, Marinko V.

    2014-11-01

    Transgenic mouse models have been instrumental in the elucidation of the molecular mechanisms behind many genetically based cardiovascular diseases such as Marfan syndrome (MFS). However, the characterization of their cardiac morphology has been hampered by the small size of the mouse heart. In this report, we adapted optical coherence tomography (OCT) for imaging fixed adult mouse hearts, and applied tools from computational anatomy to perform morphometric analyses. The hearts were first optically cleared and imaged from multiple perspectives. The acquired volumes were then corrected for refractive distortions, and registered and stitched together to form a single, high-resolution OCT volume of the whole heart. From this volume, various structures such as the valves and myofibril bundles were visualized. The volumetric nature of our dataset also allowed parameters such as wall thickness, ventricular wall masses, and luminal volumes to be extracted. Finally, we applied the entire acquisition and processing pipeline in a preliminary study comparing the cardiac morphology of wild-type mice and a transgenic mouse model of MFS.

  18. Physics and Process Modeling (PPM) and Other Propulsion R and T. Volume 1; Materials Processing, Characterization, and Modeling; Lifting Models

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This CP contains the extended abstracts and presentation figures of 36 papers presented at the PPM and Other Propulsion R&T Conference. The focus of the research described in these presentations is on materials and structures technologies that are parts of the various projects within the NASA Aeronautics Propulsion Systems Research and Technology Base Program. These projects include Physics and Process Modeling; Smart, Green Engine; Fast, Quiet Engine; High Temperature Engine Materials Program; and Hybrid Hyperspeed Propulsion. Also presented were research results from the Rotorcraft Systems Program and work supported by the NASA Lewis Director's Discretionary Fund. Authors from NASA Lewis Research Center, industry, and universities conducted research in the following areas: material processing, material characterization, modeling, life, applied life models, design techniques, vibration control, mechanical components, and tribology. Key issues, research accomplishments, and future directions are summarized in this publication.

  19. Multiscale Modeling of Angiogenesis and Predictive Capacity

    NASA Astrophysics Data System (ADS)

    Pillay, Samara; Byrne, Helen; Maini, Philip

    Tumors induce the growth of new blood vessels from existing vasculature through angiogenesis. Using an agent-based approach, we model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, model cell motility through biased random walks, and include birth- and death-like processes. We use the transition probabilities associated with the discrete model and a discrete conservation equation for cell occupancy to determine collective cell behavior, in terms of partial differential equations (PDEs). We derive three PDE models, incorporating single-species volume exclusion, multispecies volume exclusion, and no volume exclusion. By fitting the parameters in our PDE models and other well-established continuum models to agent-based simulations during a specific time period, and then comparing the outputs from the PDE models and the agent-based model at later times, we aim to determine how well the PDE models predict the future behavior of the agent-based model. We also determine whether predictions differ across PDE models and the significance of those differences. This may impact drug development strategies based on PDE models.
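
    For intuition, a minimal 1-D on-lattice volume-exclusion random walk of the kind such agent-based models build on (unbiased, with birth/death omitted; all parameters illustrative):

      import numpy as np

      rng = np.random.default_rng(2)
      L, events = 200, 40000
      occ = np.zeros(L, dtype=bool)
      occ[90:110] = True                       # initial cluster of 20 agents

      for _ in range(events):
          i = rng.integers(L)                  # pick a random site
          if not occ[i]:
              continue                         # nothing there to move
          target = i + rng.choice((-1, 1))     # unbiased step left or right
          if 0 <= target < L and not occ[target]:
              occ[i], occ[target] = False, True  # exclusion: move only into empty sites

      print("agents conserved:", occ.sum())    # birth/death omitted in this sketch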

  20. Atmospheric Ozone 1985. Assessment of our understanding of the processes controlling its present distribution and change, volume 3

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Topics addressed include: assessment models; model predictions of ozone changes; ozone and temperature trends; trace gas effects on climate; kinetics and photochemical data base; spectroscopic data base (infrared to microwave); instrument intercomparisons and assessments; and monthly mean distribution of ozone and temperature.

  1. Voluminator 2.0 - Speeding up the Approximation of the Volume of Defective 3d Building Models

    NASA Astrophysics Data System (ADS)

    Sindram, M.; Machl, T.; Steuer, H.; Pültz, M.; Kolbe, T. H.

    2016-06-01

    Semantic 3D city models are increasingly used as a data source in planning and analysis processes of cities. They represent a virtual copy of reality and are a common information base and source of information for examining urban questions. A significant advantage of virtual city models is that important indicators such as the volume of buildings, topological relationships between objects, and other geometric as well as thematic information can be derived. Knowledge of the exact building volume is an essential basis for estimating building energy demand. In order to determine the volume of buildings with conventional algorithms and tools, the buildings may not contain any topological and geometrical errors. The reality, however, is that city models very often contain errors such as missing surfaces, duplicated faces, and misclosures. To overcome these errors, Steuer et al. (2015) presented a robust method for approximating the volume of building models. For this purpose, a bounding box of the building is divided into a regular grid of voxels and it is determined which voxels are inside the building. The regular arrangement of the voxels leads to a high number of topological tests and prevents the application of this method at very high resolutions. In this paper we present an extension of the algorithm using an octree approach, limiting the subdivision of space to regions around surfaces of the building models and to regions where, in the case of defective models, the topological tests are inconclusive. We show that the computation time can be significantly reduced, while preserving the robustness against geometrical and topological errors.
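
    The core voxel-counting idea can be sketched as follows: subdivide the bounding box into a regular grid and sum the volume of voxels whose centers test as inside the solid. A unit sphere stands in for a building model here; the paper's contribution, not shown, is an octree that refines only near surfaces and ambiguous regions:

      import numpy as np

      def voxel_volume(inside, lo, hi, n=64):
          # voxel centers of an n x n x n grid over the bounding box [lo, hi]
          axes = [np.linspace(l + (h - l) / (2 * n), h - (h - l) / (2 * n), n)
                  for l, h in zip(lo, hi)]
          X, Y, Z = np.meshgrid(*axes, indexing="ij")
          cell = np.prod([(h - l) / n for l, h in zip(lo, hi)])
          return inside(X, Y, Z).sum() * cell   # count inside voxels, scale by voxel volume

      sphere = lambda x, y, z: x**2 + y**2 + z**2 <= 1.0
      est = voxel_volume(sphere, (-1, -1, -1), (1, 1, 1))
      print(f"estimate {est:.4f} vs exact {4 / 3 * np.pi:.4f}")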

  2. Advanced collapsible tank for liquid containment

    NASA Technical Reports Server (NTRS)

    Flanagan, David T.; Hopkins, Robert C.

    1993-01-01

    Tanks for bulk liquid containment will be required to support advanced planetary exploration programs. Potential applications include storage of potable, process, and waste water, and fuels and process chemicals. The launch mass and volume penalties inherent in rigid tanks suggest that collapsible tanks may be more efficient. Collapsible tanks are made of lightweight flexible material and can be folded compactly for storage and transport. Although collapsible tanks for terrestrial use are widely available, a new design was developed that has significantly less mass and bulk than existing models. Modelled after the shape of a sessile drop, this design features a dual membrane with a nearly uniform stress distribution and a low surface-to-volume ratio. It can be adapted to store a variety of liquids in nearly any environment with a constant acceleration field. Three models of 10 L, 50 L, and 378 L capacity have been constructed and tested. The 378 L (100 gallon) model weighed less than 10 percent of a commercially available collapsible tank of equivalent capacity, and required less than 20 percent of the storage space when folded for transport.

  3. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    NASA Astrophysics Data System (ADS)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth, and release in the peat-water matrix is challenging, and as a consequence these processes remain poorly understood and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches are based on a threshold: either a CH4 pore-water concentration threshold (ECT), a pressure threshold (EPT), or a free-phase gas volume threshold (EBG). The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.

    Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the two other more widely used ebullition modelling approaches, and researchers are encouraged to implement it in their CH4 emission models.
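
    To make the threshold idea concrete, here is a minimal sketch of an ECT-style rule in which pore-water CH4 above a fixed concentration threshold is vented as bubbles; the production rate, threshold, and seasonality are invented for illustration:

      import math

      c_thresh, base_prod = 1.0, 0.05          # mmol/L and mmol/L/day (made up)
      conc, flux = 0.0, []
      for day in range(365):
          prod = base_prod * (1.0 + 0.5 * math.sin(2 * math.pi * day / 365))
          conc += prod                          # CH4 accumulates in pore water
          bubble = max(0.0, conc - c_thresh)    # excess above threshold is released
          conc -= bubble
          flux.append(bubble)
      print(f"annual ebullition {sum(flux):.2f} mmol/L, max daily {max(flux):.3f}")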

  4. The Influence of Spatial Variation in Chromatin Density Determined by X-Ray Tomograms on the Time to Find DNA Binding Sites

    PubMed Central

    Larabell, Carolyn A.; Le Gros, Mark A.; McQueen, David M.; Peskin, Charles S.

    2014-01-01

    In this work, we examine how volume exclusion caused by regions of high chromatin density might influence the time required for proteins to find specific DNA binding sites. The spatial variation of chromatin density within mouse olfactory sensory neurons is determined from soft X-ray tomography reconstructions of five nuclei. We show that there is a division of the nuclear space into regions of low-density euchromatin and high-density heterochromatin. Volume exclusion experienced by a diffusing protein caused by this varying density of chromatin is modeled by a repulsive potential. The value of the potential at a given point in space is chosen to be proportional to the density of chromatin at that location. The constant of proportionality, called the volume exclusivity, provides a model parameter that determines the strength of volume exclusion. Numerical simulations demonstrate that the mean time for a protein to locate a binding site localized in euchromatin is minimized for a finite, nonzero volume exclusivity. For binding sites in heterochromatin, the mean time is minimized when the volume exclusivity is zero (the protein experiences no volume exclusion). An analytical theory is developed to explain these results. The theory suggests that for binding sites in euchromatin there is an optimal level of volume exclusivity that balances a reduction in the volume searched in finding the binding site, with the height of effective potential barriers the protein must cross during the search process. PMID:23955281

  5. Loopless nontrapping invasion-percolation model for fracking.

    PubMed

    Norris, J Quinn; Turcotte, Donald L; Rundle, John B

    2014-02-01

    Recent developments in hydraulic fracturing (fracking) have enabled the recovery of large quantities of natural gas and oil from old, low-permeability shales. These developments include a change from low-volume, high-viscosity fluid injection to high-volume, low-viscosity injection. The injected fluid introduces distributed damage that provides fracture permeability for the extraction of the gas and oil. In order to model this process, we utilize a loopless, nontrapping invasion-percolation model previously introduced to model optimal polymers in a strongly disordered medium and to determine minimum-energy spanning trees on a lattice. We performed numerical simulations on a two-dimensional square lattice and found significant differences from other percolation models. Additionally, we find that the growing fracture network satisfies both Horton-Strahler and Tokunaga network statistics. As with other invasion percolation models, our model displays burst dynamics, in which the cluster extends rapidly into a connected region. We introduce an alternative definition of bursts as a consecutive series of opened bonds whose strengths are all below a specified value. Using this definition of bursts, we find good agreement with a power-law frequency-area distribution. These results are generally consistent with the observed distribution of microseismicity during a high-volume frack.
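
    A minimal sketch of the invasion-percolation rule on a square lattice: always invade the weakest available element on the cluster boundary. This is a simplified site-based variant; the paper uses a loopless, nontrapping bond model and adds burst and network statistics:

      import heapq
      import numpy as np

      rng = np.random.default_rng(3)
      N = 64
      strength = rng.random((N, N))            # random breaking strengths
      invaded = np.zeros((N, N), dtype=bool)
      frontier = [(strength[N // 2, N // 2], (N // 2, N // 2))]  # seed at the center

      while frontier and invaded.sum() < 500:
          s, (i, j) = heapq.heappop(frontier)  # weakest site on the boundary
          if invaded[i, j]:
              continue
          invaded[i, j] = True
          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              a, b = i + di, j + dj
              if 0 <= a < N and 0 <= b < N and not invaded[a, b]:
                  heapq.heappush(frontier, (strength[a, b], (a, b)))

      print("cluster size:", invaded.sum())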

  6. Assessment of effectiveness of geologic isolation systems. Geologic-simulation model for a hypothetical site in the Columbia Plateau. Volume 2: results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, M.G.; Petrie, G.M.; Baldwin, A.J.

    1982-06-01

    This report contains the input data and computer results for the Geologic Simulation Model. This model is described in detail in the following report: Petrie, G.M., et al. 1981. Geologic Simulation Model for a Hypothetical Site in the Columbia Plateau, Pacific Northwest Laboratory, Richland, Washington. The Geologic Simulation Model is a quasi-deterministic process-response model which simulates, for a million years into the future, the development of the geologic and hydrologic systems of the ground-water basin containing the Pasco Basin. Effects of natural processes on the ground-water hydrologic system are modeled principally by rate equations. The combined effects and synergistic interactions of different processes are approximated by linear superposition of their effects during discrete time intervals in a stepwise-integration approach.

  7. Regional grey matter volume abnormalities in bulimia nervosa and binge-eating disorder.

    PubMed

    Schäfer, Axel; Vaitl, Dieter; Schienle, Anne

    2010-04-01

    This study investigated whether bulimia nervosa (BN) and binge-eating disorder (BED) are associated with structural brain abnormalities. Both disorders share the main symptom binge-eating, but are considered differential diagnoses. We attempted to identify alterations in grey matter volume (GMV) that are present in both psychopathologies as well as disorder-specific GMV characteristics. Such information can help to improve neurobiological models of eating disorders and their classification. A total of 50 participants (patients suffering from BN (purge type), BED, and normal-weight controls) underwent structural MRI scanning. GMV for specific brain regions involved in food/reinforcement processing was analyzed by means of voxel-based morphometry. Both patient groups were characterized by greater volumes of the medial orbitofrontal cortex (OFC) compared to healthy controls. In BN patients, who had increased ventral striatum volumes, body mass index and purging severity were correlated with striatal grey matter volume. Altogether, our data implicate a crucial role of the medial OFC in the studied eating disorders. The structural abnormality might be associated with dysfunctions in food reward processing and/or self-regulation. The bulimia-specific volume enlargement of the ventral striatum is discussed in the framework of negative reinforcement through purging and associated weight regulation. Copyright 2009 Elsevier Inc. All rights reserved.

  8. Drizzle formation in stratocumulus clouds: Effects of turbulent mixing

    DOE PAGES

    Magaritz-Ronen, L.; Pinsky, M.; Khain, A.

    2016-02-17

    The mechanism of drizzle formation in shallow stratocumulus clouds and the effect of turbulent mixing on this process are investigated. A Lagrangian–Eulerian model of the cloud-topped boundary layer is used to simulate the cloud measured during flight RF07 of the DYCOMS-II field experiment. The model contains ~2000 air parcels that are advected in a turbulence-like velocity field. In the model all microphysical processes are described for each Lagrangian air volume, and turbulent mixing between the parcels is also taken into account. It was found that the first large drops form in air volumes that are closest to adiabatic and characterized by high humidity, extended residence near cloud top, and maximum values of liquid water content, allowing the formation of drops as a result of efficient collisions. The first large drops form near cloud top and initiate drizzle formation in the cloud. Drizzle develops only when turbulent mixing of parcels is included in the model. Without mixing, the cloud structure is extremely inhomogeneous and the few large drops that do form in the cloud evaporate during their sedimentation. Lastly, it was found that turbulent mixing can delay the onset of drizzle but is essential for its further development in the cloud.

  10. Incorporating pushing in exclusion-process models of cell migration.

    PubMed

    Yates, Christian A; Parker, Andrew; Baker, Ruth E

    2015-05-01

    The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
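
    A minimal 1-D sketch of an on-lattice exclusion process with a pushing move, in the spirit of the model described above (the geometry, rates, and pushing probability are illustrative):

      import numpy as np

      rng = np.random.default_rng(4)
      L, events, p_push = 100, 20000, 0.3
      occ = np.zeros(L, dtype=bool)
      occ[40:60] = True                         # initial block of agents

      for _ in range(events):
          i = rng.integers(L)
          if not occ[i]:
              continue
          d = rng.choice((-1, 1))
          j, k = i + d, i + 2 * d               # neighbor and the site beyond it
          if not (0 <= j < L):
              continue
          if not occ[j]:                        # ordinary exclusion move
              occ[i], occ[j] = False, True
          elif 0 <= k < L and not occ[k] and rng.random() < p_push:
              occ[j], occ[k] = False, True      # push the blocking neighbor along...
              occ[i], occ[j] = False, True      # ...and take its place

      print("agents conserved:", occ.sum())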

  11. Examination and evaluation of the use of screen heaters for the measurement of the high temperature pyrolysis kinetics of polyethene and polypropene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westerhout, R.W.J.; Balk, R.H.P.; Meijer, R.

    1997-08-01

    A screen heater with a gas sweep was developed and applied to study the pyrolysis kinetics of low-density polyethene (LDPE) and polypropene (PP) at temperatures ranging from 450 to 530 C. The aim of this study was to examine the applicability of screen heaters for measuring these kinetics. On-line measurement of the rate of volatiles formation using a hydrocarbon analyzer made it possible to determine the conversion rate over the entire conversion range from a single experiment. Another important feature of the screen heater used in this study is the possibility of measuring pyrolysis kinetics under nearly isothermal conditions. The kinetic constants for LDPE and PP pyrolysis were determined using a first-order model to describe the conversion rate in the 70-90% conversion range and the random chain dissociation model for the entire conversion range. In addition to the experimental work, two single-particle models were developed, each incorporating a mass balance and a coupled enthalpy balance; these were used to assess the influence of internal and external heat transfer processes on the pyrolysis process. The first model assumes a variable density and constant volume during the pyrolysis process, whereas the second model assumes a constant density and a variable volume. An important feature of these models is that they can accommodate kinetic models for which no analytical representation of the pyrolysis kinetics is available.

  12. The influence of buoyant forces and volume fraction of particles on the particle pushing/entrapment transition during directional solidification of Al/SiC and Al/graphite composites

    NASA Technical Reports Server (NTRS)

    Stefanescu, Doru M.; Moitra, Avijit; Kacar, A. Sedat; Dhindaw, Brij K.

    1990-01-01

    Directional solidification experiments in a Bridgman-type furnace were used to study particle behavior at the liquid/solid interface in aluminum metal matrix composites. Graphite or silicon-carbide particles were first dispersed in aluminum-base alloys via a mechanically stirred vortex. Then, 100-mm-diameter and 120-mm-long samples were cast in steel dies and used for directional solidification. The processing variables controlled were the direction and velocity of solidification and the temperature gradient at the interface. The material variables monitored were the interface energy, the liquid/particle density difference, the particle/liquid thermal conductivity ratio, and the volume fraction of particles. These properties were changed by selecting combinations of particles (graphite or silicon carbide) and alloys (Al-Cu, Al-Mg, Al-Ni). A model which considers process thermodynamics, process kinetics (including the role of buoyant forces), and thermophysical properties was developed. Based on solidification direction and velocity, and on materials properties, four types of behavior were predicted. Sessile drop experiments were also used to determine some of the interface energies required for calculations with the proposed model. Experimental results compared favorably with model predictions.

  14. Discrete element modeling of the mass movement and loose material supplying the gully process of a debris avalanche in the Bayi Gully, Southwest China

    NASA Astrophysics Data System (ADS)

    Zhou, Jia-wen; Huang, Kang-xin; Shi, Chong; Hao, Ming-hui; Guo, Chao-xu

    2015-03-01

    The dynamic process of a debris avalanche in mountainous areas is influenced by the landslide volume, topographical conditions, mass-material composition, mechanical properties, and other factors. A good understanding of the mass movement and loose material supplying the gully process is very important for understanding the dynamic properties of debris avalanches. Three-dimensional particle flow code (PFC3D) was used to simulate a debris avalanche in Quaternary deposits at the Bayi Gully, Southwest China. FORTRAN and AutoCAD were used for the secondary development to display the mass movement process and to quantitatively describe the mass movement and loose material supplying the gully process. The simulated results show that after the landslide is initiated, the gravitational potential energy is converted into kinetic energy with varying velocity for the sliding masses. Two stages exist for the average movement velocity: the acceleration stage and the slowdown stage, which are influenced by the topographical conditions. For the loose materials supplying the gully process, the cumulative volume of the sliding masses into the gully gradually increases over time. When the landslide volume is not large enough, increasing the landslide volume does not appreciably influence the movement process of the sliding masses. The travel distance and movement velocity increase with decreasing numerical parameters, and the mass-movement process finishes more quickly with low-value parameters. The deposition area of the sliding masses decreases with increasing numerical parameters, and the corresponding deposition thickness increases. The mass movement of the debris avalanche is not only influenced by the mechanical parameters but is also controlled by the topographical conditions.

  15. Significant volume reduction of tank waste by selective crystallization: 1994 Annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herting, D.L.; Lunsford, T.R.

    1994-09-27

    The objective of this technology task plan is to develop and demonstrate a scalable process to reclaim sodium nitrate (NaNO₃) from Hanford waste tanks as a clean, nonradioactive salt. The purpose of the so-called Clean Salt Process is to reduce the volume of low-level waste glass by as much as 70%. During the reporting period of October 1, 1993, through May 31, 1994, progress was made on four fronts: laboratory studies, surrogate waste compositions, contracting for university research, and flowsheet development and modeling. In the laboratory, experiments with simulated waste were done to explore the effects of crystallization parameters on the size and crystal habit of product NaNO₃ crystals. Data were obtained to allow prediction of the decontamination factor as a function of solid/liquid separation parameters. Experiments with actual waste from tank 101-SY were done to determine the extent of contaminant occlusions in NaNO₃ crystals. In preparation for defining surrogate waste compositions, single-shell tanks were categorized according to the weight percent NaNO₃ in each tank. A detailed process flowsheet and computer model were created using the ASPENPlus steady-state process simulator. This is the same program being used by the Tank Waste Remediation System (TWRS) program for their waste pretreatment and disposal projections; therefore, evaluations can be made of the effect of the Clean Salt Process on the low-level waste volume and composition resulting from the TWRS baseline flowsheet. Calculations, using the same assumptions as the TWRS baseline where applicable, indicate that the number of low-level glass vaults would be reduced from 44 to 16 if the Clean Salt Process were incorporated into the baseline flowsheet.

  16. Experience-based quality control of clinical intensity-modulated radiotherapy planning.

    PubMed

    Moore, Kevin L; Brame, R Scott; Low, Daniel A; Mutic, Sasa

    2011-10-01

    To incorporate a quality control tool, based on previous planning experience and patient-specific anatomic information, into the intensity-modulated radiotherapy (IMRT) plan generation process, and to determine whether the tool improved treatment plan quality. A retrospective study of 42 IMRT plans demonstrated a correlation between the fraction of an organ at risk (OAR) overlapping the planning target volume (PTV) and the mean dose. This yielded a model, predicted dose = prescription dose × (0.2 + 0.8 × [1 − exp(−3 × PTV overlap volume/OAR volume)]), that predicted the achievable mean dose from the PTV overlap fraction and the prescription dose. The model was incorporated into the planning process by way of a user-executable script that reported the predicted dose for any OAR. The script was introduced to clinicians engaged in IMRT planning and deployed thereafter. The script's effect was evaluated by tracking δ = (mean dose − predicted dose)/predicted dose, the fraction by which the mean dose exceeded the model. All OARs under investigation (rectum and bladder in prostate cancer; parotid glands, esophagus, and larynx in head-and-neck cancer) exhibited both smaller δ and reduced variability after script implementation. These effects were substantial for the parotid glands, for which the previous δ = 0.28 ± 0.24 was reduced to δ = 0.13 ± 0.10. The clinical relevance was most evident in the subset of cases in which the parotid glands were potentially salvageable (predicted dose <30 Gy). Before script implementation, an average of 30.1 Gy was delivered in the salvageable cases, with an average predicted dose of 20.3 Gy. After implementation, an average of 18.7 Gy was delivered in salvageable cases, with an average predicted dose of 17.2 Gy. In the prostate cases, the rectum model excess was reduced from δ = 0.28 ± 0.20 to δ = 0.07 ± 0.15. On surveying dosimetrists at the end of the study, most reported that the script both improved their IMRT planning (8 of 10) and increased their efficiency (6 of 10). This tool proved successful in increasing normal tissue sparing and reducing interclinician variability, providing effective quality control of the IMRT plan development process. Copyright © 2011 Elsevier Inc. All rights reserved.
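
    As a worked example of the quoted model (the organ geometry and achieved dose below are hypothetical, not values from the study):

      import math

      def predicted_mean_dose(rx, overlap, v_oar):
          # predicted = Rx * (0.2 + 0.8 * (1 - exp(-3 * overlap / v_oar)))
          return rx * (0.2 + 0.8 * (1.0 - math.exp(-3.0 * overlap / v_oar)))

      # hypothetical parotid: 70 Gy prescription, 3 cm^3 of a 25 cm^3 gland inside the PTV
      pred = predicted_mean_dose(70.0, 3.0, 25.0)   # about 30.9 Gy
      mean_dose = 35.0                              # achieved plan value (made up)
      delta = (mean_dose - pred) / pred             # fraction by which the plan exceeds the model
      print(f"predicted {pred:.1f} Gy, delta = {delta:+.2f}")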

  17. A NASTRAN model of a large flexible swing-wing bomber. Volume 3: NASTRAN model development-wing structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the wing structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The wing substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  18. A NASTRAN model of a large flexible swing-wing bomber. Volume 2: NASTRAN model development-horizontal stabilzer, vertical stabilizer and nacelle structures

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.; Tisher, E. D.

    1982-01-01

    The NASTRAN model plans for the horizontal stabilizer, vertical stabilizer, and nacelle structure were expanded in detail to generate the NASTRAN model for each of these substructures. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. Each substructure model was thoroughly checked out for continuity, connectivity, and constraints. These substructures were processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail models. Finally, a demonstration and validation processing of these substructures was accomplished using the NASTRAN finite element program installed at NASA/DFRC facility.

  19. A NASTRAN model of a large flexible swing-wing bomber. Volume 4: NASTRAN model development-fuselage structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the fuselage structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The fuselage substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  20. Defect-induced solid state amorphization of molecular crystals

    NASA Astrophysics Data System (ADS)

    Lei, Lei; Carvajal, Teresa; Koslowski, Marisol

    2012-04-01

    We investigate the process of mechanically induced amorphization in small molecule organic crystals under extensive deformation. In this work, we develop a model that describes the amorphization of molecular crystals, in which the plastic response is calculated with a phase field dislocation dynamics theory in four materials: acetaminophen, sucrose, γ-indomethacin, and aspirin. The model is able to predict the fraction of amorphous material generated in single crystals for a given applied stress. Our results show that γ-indomethacin and sucrose demonstrate large volume fractions of amorphous material after sufficient plastic deformation, while smaller amorphous volume fractions are predicted in acetaminophen and aspirin, in agreement with experimental observation.

  1. Calibration of the highway safety manual for Missouri.

    DOT National Transportation Integrated Search

    2013-12-01

    The new Highway Safety Manual (HSM) contains predictive models that need to be calibrated to local conditions. This calibration process requires detailed data types, such as crash frequencies, traffic volumes, geometrics, and land use. The HSM do...

  2. A 63 K phase change unit integrating with pulse tube cryocoolers

    NASA Astrophysics Data System (ADS)

    Chunhui, Kong; Liubiao, Chen; Sixue, Liu; Yuan, Zhou; Junjie, Wang

    2017-02-01

    This article presents the design and computer model results of an integrated cooler system which consists of a single-stage pulse tube cryocooler integrated with a small amount of a phase change material. A cryogenic thermal switch was used to thermally connect the phase change unit to the cold end of the cryocooler. During heat load operation, the cryogenic thermal switch is turned off to avoid vibrations. The phase change unit absorbs heat loads by melting a substance in a constant pressure-temperature-volume process. Once the substance has melted, the cryogenic thermal switch is turned on and the cryocooler can then refreeze the material. Advantages of this type of cooler are no vibrations during sensor operations; the ability to absorb increased heat loads; potentially longer system lifetime; and lower mass, volume, and cost. A numerical model was constructed from derived thermodynamic relationships for the cooling/heating and freezing/melting processes.
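
    A quick energy budget illustrates the hold-time benefit of such a unit. A minimal Python sketch, assuming the phase change material is solid nitrogen (its triple point, ~63.15 K, sits near the 63 K design temperature) with a heat of fusion of roughly 25.7 kJ/kg; the mass and heat load are hypothetical:

        # Back-of-envelope budget for a melting phase change unit.
        latent_heat = 25.7e3   # J/kg, approximate heat of fusion of nitrogen (assumed PCM)
        pcm_mass = 0.5         # kg, hypothetical charge of phase change material
        heat_load = 0.2        # W, hypothetical sensor heat load

        absorbed_energy = pcm_mass * latent_heat          # J absorbed until fully melted
        hold_time_h = absorbed_energy / heat_load / 3600  # hours of vibration-free operation
        print(f"absorbs {absorbed_energy / 1e3:.1f} kJ, holds {hold_time_h:.1f} h")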

  3. The cutting mechanism of the electrosurgical scalpel

    NASA Astrophysics Data System (ADS)

    Gjika, Eda; Pekker, Mikhail; Shashurin, Alexey; Shneider, Mikhail; Zhuang, Taisen; Canady, Jerome; Keidar, Michael

    2017-01-01

    Electrosurgical cutting is a well-known technique for creating incisions, often used for the removal of benign and malignant tumors. The proposed mathematical model suggests that incisions are created due to the localized heating of the tissue. The model estimates a volume of tissue heating on the order of 2×10⁻⁴ mm³. This relatively small predicted volume explains why the heat generated from the very tip of the scalpel is unable to cause extensive damage to the tissue adjacent to the incision site. The scalpel exposes the target region to an RF field in 60 ms pulses until a temperature of around 100 °C is reached. This process leads to desiccation, where the tissue is characterized by a significantly reduced electrical conductivity, which prevents further heating and charring. Subsequently, the incision is created by the mechanical scraping process that follows.
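
    The quoted heated volume implies a tiny per-spot energy. A minimal sketch, assuming generic soft-tissue density and specific heat (values not taken from the paper):

        volume_m3 = 2e-4 * 1e-9   # the paper's 2x10^-4 mm^3 estimate, in m^3
        density = 1050.0          # kg/m^3, assumed soft-tissue density
        specific_heat = 3600.0    # J/(kg K), assumed soft-tissue specific heat
        delta_t = 100.0 - 37.0    # K, body temperature to ~100 degC

        energy_j = volume_m3 * density * specific_heat * delta_t
        print(f"energy per heated spot: {energy_j * 1e3:.3f} mJ")  # ~0.048 mJ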

  4. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference, and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising large pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy-Chapman-Stern model. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. Finally, the source terms that appear in the averaged equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.
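
    For orientation, the Gouy-Chapman-Stern picture treats the double layer as a Stern layer in series with a diffuse layer. A minimal sketch with illustrative electrolyte values (none taken from the paper):

        import numpy as np

        eps0, eps_r = 8.854e-12, 78.5           # vacuum permittivity; water
        e, kB, T, z = 1.602e-19, 1.381e-23, 298.0, 1
        n0 = 1e25                               # ions/m^3 (~17 mM), assumed
        stern_thickness = 0.5e-9                # m, assumed
        psi_d = 0.05                            # V, assumed diffuse-layer potential

        lam_D = np.sqrt(eps0 * eps_r * kB * T / (2 * z**2 * e**2 * n0))  # Debye length
        C_stern = eps0 * eps_r / stern_thickness
        C_diff = (eps0 * eps_r / lam_D) * np.cosh(z * e * psi_d / (2 * kB * T))
        C_total = 1.0 / (1.0 / C_stern + 1.0 / C_diff)   # series combination, F/m^2
        print(f"Debye length {lam_D * 1e9:.2f} nm, C {C_total * 100:.1f} uF/cm^2")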

  5. Predictive modeling of multicellular structure formation by using Cellular Particle Dynamics simulations

    NASA Astrophysics Data System (ADS)

    McCune, Matthew; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan

    2014-03-01

    Cellular Particle Dynamics (CPD) is an effective computational method for describing and predicting the time evolution of biomechanical relaxation processes of multicellular systems. A typical example is the fusion of spheroidal bioink particles during post-bioprinting structure formation. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short-range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through integration of their equations of motion. CPD was successfully applied to describe and predict the fusion of 3D tissue constructs involving identical spherical aggregates. Here, we demonstrate that CPD can also predict tissue formation involving uneven spherical aggregates whose volumes decrease during the fusion process. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
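
    The essential ingredients, a short-range repulsive core plus an adhesive tail and overdamped motion, fit in a few lines. A minimal sketch with a Lennard-Jones-style pair force and illustrative parameters (not the calibrated CPD values):

        import numpy as np

        grid = np.arange(4) * 1.12      # lattice spacing near the potential minimum
        pos = np.array([[x, y, z] for x in grid for y in grid for z in grid])
        pos += 0.01 * np.random.default_rng(0).normal(size=pos.shape)
        dt, mobility, cutoff = 1e-4, 1.0, 2.5

        def forces(p):
            f = np.zeros_like(p)
            for i in range(len(p)):
                d = p - p[i]                          # vectors from particle i
                r = np.linalg.norm(d, axis=1)
                mask = (r > 1e-9) & (r < cutoff)
                rm = r[mask]
                mag = 24.0 * (2.0 / rm**13 - 1.0 / rm**7)  # repulsive core, attractive tail
                f[i] -= (d[mask] * (mag / rm)[:, None]).sum(axis=0)
            return f

        for _ in range(2000):                         # overdamped Euler steps
            pos += dt * mobility * forces(pos)
        print("mean distance from center:", np.linalg.norm(pos - pos.mean(0), axis=1).mean())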

  6. Northeast Artificial Intelligence Consortium (NAIC). Volume 15. Strategies for Coupling Symbolic and Numerical Computation in Knowledge Base Systems

    DTIC Science & Technology

    1990-12-01

    Contents: Implementation of Coupled System; Case Studies & Implementation Examples; The Case Studies of Coupled System; Example: Coupled System... occurs during specific phases of the problem-solving process. By decomposing the coupling process into its component layers we effectively study the nature... by the qualitative model, an appropriate mathematical model is invoked. 5) The results are verified. If successful, stop. Else go to (2) and use an...

  7. Advanced GaAs Process Modeling. Volume 1

    DTIC Science & Technology

    1989-05-01

    Subject terms: Gallium Arsenide, MESFET, Process... Contents: Background; Model Calculations; Conclusions; Ion-Implantation into GaAs Profile Determination; Ion Implantation Profile Determination in GaAs (Background; Experimental Measurements; Results: Ion-Energy Dependence; Tilt and Rotation).

  8. Knowledge Elicitation: Phase 1 Final Report. Volume 1

    DTIC Science & Technology

    1989-06-01

    ...i.e., superficial features such as type of apparatus, while experts rely on basic principles of physics (e.g., conservation of energy) and generic... process. This last part of the model would typically consist of descriptions of the impact of the process on one or more of the objects. Figure 3-4... goals. The elicitor is probing for an underlying mental model. Expert: To kill him before he can take any action that would impact on our forces.

  9. Specification/Verification of Temporal Properties for Distributed Systems: Issues and Approaches. Volume 1

    DTIC Science & Technology

    1990-02-01

    copies P1,...,Pn of a multiple module fp resolve nondeterminism (local or global) in an identical manner. 5. The copies P1,...,Pn are physically... recovery block. A recovery block consists of a conventional block (as in ALGOL or PL/I) which is provided with a means of error detection, called an... improved failures model for communicating processes. In Proceedings, NSF-SERC Seminar on Concurrency, volume 197 of Lecture Notes in Computer Science.

  10. Fuel Effects on Nozzle Flow and Spray Using Fully Coupled Eulerian Simulations

    DTIC Science & Technology

    2015-09-01

    Nomenclature: density of liquid fuel, kg/m³; density of ambient gas, kg/m³; VOF = Volume of Fluid model; Volume of Fluid scalar; ROI = Rate of... have been reported arising from individual refinery processes, crude oil source, and also varying with season, year and age of the fuel. This myriad... configurations. Under reacting conditions, Violi et al. (6) presented a surrogate mixture of six pure hydrocarbons (Utah surrogate) and found that it...

  11. Digital image processing for the earth resources technology satellite data.

    NASA Technical Reports Server (NTRS)

    Will, P. M.; Bakis, R.; Wesley, M. A.

    1972-01-01

    This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions is discussed, and a byte-oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67 and show that a processing throughput of 1000 image sets per week is feasible.
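
    The feasibility claim is simple arithmetic; the stated throughput leaves roughly ten minutes of machine time per image set:

        seconds_per_week = 7 * 24 * 3600
        budget = seconds_per_week / 1000        # ~605 s of processing per image set
        print(f"{budget:.0f} s per image set")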

  12. Computational modeling of the pressurization process in a NASP vehicle propellant tank experimental simulation

    NASA Technical Reports Server (NTRS)

    Sasmal, G. P.; Hochstein, J. I.; Wendl, M. C.; Hardy, T. L.

    1991-01-01

    A multidimensional computational model of the pressurization process in a slush hydrogen propellant storage tank was developed and its accuracy evaluated by comparison to experimental data measured for a 5 ft diameter spherical tank. The fluid mechanic, thermodynamic, and heat transfer processes within the ullage are represented by a finite-volume model. The model was shown to be in reasonable agreement with the experimental data. A parameter study was undertaken to examine the dependence of the pressurization process on initial ullage temperature distribution and pressurant mass flow rate. It is shown that for a given heat flux rate at the ullage boundary, the pressurization process is nearly independent of the initial temperature distribution. Significant differences were identified between the ullage temperature and velocity fields predicted for pressurization of slush and those predicted for pressurization of liquid hydrogen. A simplified model of the pressurization process was constructed in search of a dimensionless characterization of the pressurization process. It is shown that the relationship derived from this simplified model collapses all of the pressure history data generated during this study onto a single curve.

  13. Automated volumetric lung segmentation of thoracic CT images using fully convolutional neural network

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Beymer, David; Syeda-Mahmood, Tanveer

    2018-02-01

    Deep learning models such as convolutional neural networks (CNNs) have achieved state-of-the-art performance in 2D medical image analysis. In clinical practice, however, most acquired and analyzed medical data take the form of 3D volumes. In this paper, we present a fast and efficient 3D lung segmentation method based on V-net, a purely volumetric fully convolutional network. Our model is trained on chest CT images through volume-to-volume learning, which palliates the overfitting problem posed by the limited number of annotated training volumes. Adopting a pre-processing step and training with an objective function based on the Dice coefficient address the imbalance between the number of lung voxels and the number of background voxels. We leveraged the V-net model by using batch normalization during training, which enables a higher learning rate and accelerates training. To address the inadequacy of training data and obtain better robustness, we augment the data by applying random linear and non-linear transformations. Experimental results on two challenging medical image datasets show that our proposed method achieves competitive results at a much faster speed.
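
    The Dice objective at the heart of the training scheme is compact. A minimal NumPy sketch (array names are ours, not the paper's):

        import numpy as np

        def soft_dice(pred, target, eps=1e-6):
            """Soft Dice coefficient; 1.0 means perfect overlap."""
            intersection = np.sum(pred * target)
            return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

        pred = np.random.rand(32, 32, 32)                            # toy soft output
        target = (np.random.rand(32, 32, 32) > 0.9).astype(float)    # sparse toy mask
        print(f"Dice loss: {1.0 - soft_dice(pred, target):.3f}")     # minimized in training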

  14. A Volume Flux Approach to Cryolava Dome Emplacement on Europa

    NASA Technical Reports Server (NTRS)

    Quick, Lynnae C.; Fagents, Sarah A.; Hurford, Terry A.; Prockter, Louise M.

    2017-01-01

    We previously modeled a subset of domes on Europa with morphologies consistent with emplacement by viscous extrusions of cryolava. These models assumed instantaneous emplacement of a fixed volume of fluid onto the surface, followed by relaxation to form domes. However, this approach only allowed for the investigation of late-stage eruptive processes far from the vent and provided little insight into how cryolavas arrived at the surface. Consideration of dome emplacement as cryolavas erupt at the surface is therefore pertinent. A volume flux approach, in which lava erupts from the vent at a constant rate, was successfully applied to the formation of steep-sided volcanic domes on Venus. These domes are believed to have formed in the same manner as candidate cryolava domes on Europa. In order to gain a more complete understanding of the potential for the emplacement of Europa domes via extrusive volcanism, we have applied this new volume flux approach to the formation of putative cryovolcanic domes on Europa. Assuming, as before, that europan cryolavas are briny aqueous solutions which may or may not contain some ice crystal fraction, we present the results of this modeling and explore theories for the formation of the low-albedo moats that surround some domes.
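
    For a Newtonian fluid fed at constant volume flux, the classical axisymmetric viscous gravity-current solution (Huppert, 1982) gives R(t) = 0.715 (g Q^3 / 3ν)^(1/8) t^(1/2). A minimal sketch with assumed europan numbers (the flux and viscosity are illustrative, not the paper's):

        g_europa = 1.315   # m/s^2, surface gravity of Europa
        Q = 1.0            # m^3/s, assumed eruption volume flux
        nu = 0.1           # m^2/s, assumed kinematic viscosity of the brine
        t = 3.15e7         # s, about one Earth year of effusion

        radius = 0.715 * (g_europa * Q**3 / (3.0 * nu))**0.125 * t**0.5
        print(f"dome radius after one year: {radius / 1000:.1f} km")   # ~4.8 km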

  15. Numerical Modeling of Nanocellular Foams Using Classical Nucleation Theory and Influence Volume Approach

    NASA Astrophysics Data System (ADS)

    Khan, Irfan; Costeux, Stephane; Bunker, Shana; Moore, Jonathan; Kar, Kishore

    2012-11-01

    Nanocellular porous materials present unusual optical, dielectric, thermal and mechanical properties and are thus envisioned to find use in a variety of applications. Thermoplastic polymeric foams show considerable promise in achieving these properties. However, there are still considerable challenges in achieving nanocellular foams with densities as low as conventional foams. Lack of in-depth understanding of the effects of process parameters and physical properties on the foaming process is a major obstacle. A numerical model has been developed to simulate the simultaneous nucleation and bubble growth during depressurization of thermoplastic polymers saturated with supercritical blowing agents. The model is based on the popular "Influence Volume Approach," which assumes that a growing boundary layer with depleted blowing agent surrounds each bubble. Classical nucleation theory is used to predict the rate of nucleation of bubbles. By solving the mass balance, momentum balance and species conservation equations for each bubble, the model is capable of predicting average bubble size, bubble size distribution and bulk porosity. The model is modified to include mechanisms for Joule-Thomson cooling during depressurization and secondary foaming. Simulation results for polymers with and without nucleating agents will be discussed and compared with experimental data.
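
    The classical-nucleation ingredients are the barrier 16πσ³/(3ΔP²) and the critical radius 2σ/ΔP. A minimal sketch; the interfacial tension, pressure drop and kinetic prefactor are illustrative assumptions:

        import numpy as np

        kB = 1.381e-23
        sigma = 0.01    # N/m, assumed polymer/blowing-agent interfacial tension
        dP = 20e6       # Pa, assumed supersaturation pressure drop
        T = 300.0       # K
        J0 = 1e30       # 1/(m^3 s), assumed kinetic prefactor

        W_star = 16.0 * np.pi * sigma**3 / (3.0 * dP**2)   # nucleation barrier, J
        r_crit = 2.0 * sigma / dP                          # critical bubble radius, m
        J = J0 * np.exp(-W_star / (kB * T))                # nucleation rate
        print(f"r* = {r_crit * 1e9:.1f} nm, barrier = {W_star / (kB * T):.1f} kBT, J = {J:.2e}")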

  16. Hippocampal volume changes following electroconvulsive therapy: a systematic review and meta-analysis.

    PubMed

    Wilkinson, Samuel T; Sanacora, Gerard; Bloch, Michael H

    2017-05-01

    Reduced hippocampal volume is one of the most consistent morphological findings in Major Depressive Disorder (MDD). Electroconvulsive therapy (ECT) is the most effective therapy for MDD, yet its mechanism of action remains poorly understood. Animal models show that ECT induces several neuroplastic processes, which lead to hippocampal volume increases. We conducted a meta-analysis of ECT studies in humans to investigate its effects on hippocampal volume. PubMed was searched for studies examining hippocampal volume before and after ECT. A random-effects model was used for meta-analysis with the standardized mean difference (SMD) of the change in hippocampal volume before and after ECT as the primary outcome. Nine studies involving 174 participants were included. Total hippocampal volumes increased significantly following ECT compared to pre-treatment values (SMD=1.10; 95% CI 0.80-1.39; z=7.34; p<0.001; k=9). Both right (SMD=1.01; 95% CI 0.72-1.30; z=6.76; p<0.001; k=7) and left (SMD=0.87; 95% CI 0.51-1.23; z=4.69; p<0.001; k=7) hippocampal volumes were also similarly increased significantly following ECT. We demonstrated no correlation between improvement in depression symptoms with ECT and change in total hippocampal volume (beta=-1.28, 95% CI -4.51 to 1.95, z=-0.78, p=0.44). We demonstrate fairly consistent increases in hippocampal volume bilaterally following ECT treatment. The relationship among these volumetric changes, clinical improvement, and cognitive side effects of ECT should be explored by larger, multisite studies with harmonized imaging methods.
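
    The pooling machinery behind such an SMD is the standard DerSimonian-Laird random-effects model. A minimal sketch with invented per-study effects (not the nine studies analyzed here):

        import numpy as np

        def random_effects(y, v):
            """y: per-study SMDs, v: their variances; returns pooled SMD and SE."""
            w = 1.0 / v
            q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)   # Cochran's Q
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c)              # between-study variance
            w_star = 1.0 / (v + tau2)
            pooled = np.sum(w_star * y) / np.sum(w_star)
            return pooled, np.sqrt(1.0 / np.sum(w_star))

        y = np.array([1.3, 0.9, 1.2, 0.8, 1.1])        # hypothetical SMDs
        v = np.array([0.10, 0.08, 0.12, 0.09, 0.11])   # hypothetical variances
        smd, se = random_effects(y, v)
        print(f"SMD = {smd:.2f} (95% CI {smd - 1.96 * se:.2f} to {smd + 1.96 * se:.2f})")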

  17. The exit-time problem for a Markov jump process

    NASA Astrophysics Data System (ADS)

    Burch, N.; D'Elia, M.; Lehoucq, R. B.

    2014-12-01

    The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., one in which the distance the particle can jump is bounded independently of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition, necessary to demonstrate that the nonlocal diffusion equation is well-posed and consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.
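
    The exit-time distribution of such a finite-range process is easy to probe by simulation. A minimal Monte Carlo sketch for jumps uniform on [-delta, delta] taken at unit-rate exponential waiting times, leaving the interval (-1, 1):

        import numpy as np

        rng = np.random.default_rng(1)
        delta, n_paths = 0.2, 5000
        exit_times = np.empty(n_paths)
        for k in range(n_paths):
            x, t = 0.0, 0.0
            while abs(x) < 1.0:                     # still inside the domain
                t += rng.exponential(1.0)           # waiting time between jumps
                x += rng.uniform(-delta, delta)     # bounded jump
            exit_times[k] = t
        print(f"mean exit time ~ {exit_times.mean():.1f}")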

  18. Early experiences of planning stereotactic radiosurgery using 3D printed models of eyes with uveal melanomas

    PubMed Central

    Furdová, Alena; Sramka, Miron; Thurzo, Andrej; Furdová, Adriana

    2017-01-01

    Objective: The objective of this study was to determine the use of a 3D printed model of an eye with an intraocular tumor for linear accelerator-based stereotactic radiosurgery. Methods: Segmentation software (3D Slicer) created a virtual 3D model of the eye globe with the tumorous mass based on tissue density from computed tomography and magnetic resonance imaging data. The virtual model was then processed in slicing software (Simplify3D®) and printed on a 3D printer using fused deposition modeling technology. The printing material was polylactic acid. Results: In 2015, the stereotactic planning scheme was optimized with the help of a 3D printed model of the patient's eye with the intraocular tumor. In the period 2001-2015, a group of 150 patients with uveal melanoma (139 choroidal melanoma and 11 ciliary body melanoma) were treated. The median tumor volume was 0.5 cm³ (0.2-1.6 cm³). The radiation dose was 35.0 Gy at 99% of the dose volume histogram. Conclusion: The 3D printed model of the eye with the tumor was helpful in planning the process to achieve the optimal irradiation scheme, which requires high accuracy in defining the targeted tumor mass and critical structures. PMID:28203052

  19. Integration of the Total Lightning Jump Algorithm into Current Operational Warning Environment Conceptual Models

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Carey, Lawerence D.; Schultz, Elise V.; Stano, Geoffery T.; Kozlowski, Danielle M.; Goodman, Steven

    2012-01-01

    Key points that this analysis will begin to address are: 1) What is physically going on in the cloud when there is a jump in lightning? (Updraft variations, ice fluxes.) 2) How do these processes fit in with severe storm conceptual models? 3) What would this information provide an end user (i.e., the forecaster)? (Relate the LJA to radar observations, like changes in reflectivity, MESH, VIL, etc., based on multi-Doppler-derived physical relationships.) 4) How do we best transition this algorithm into the warning decision process? The known relationship between lightning, updraft strength/volume, and precipitation ice mass production can be extended to the concept of the lightning jump. Examination of the first lightning jump times from 329 storms in Schultz et al. shows an increase in the mean reflectivity profile and mixed-phase echo volume during the 10 minutes prior to the lightning jump. Limited dual-Doppler results show that the largest lightning jumps are well correlated in time with increases in updraft strength/volume and precipitation ice mass production; however, the smaller magnitude lightning jumps appear to have more subtle relationships to updraft and ice mass characteristics.
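
    For context, one published configuration of the lightning jump algorithm (the "2-sigma" scheme of Schultz et al.) flags a jump when the latest rate of change of the total flash rate exceeds twice the standard deviation of its recent history; the window lengths and the 10 flashes/min activation floor below reflect our reading of that scheme and should be treated as assumptions:

        import numpy as np

        def lightning_jump(flash_rates, min_rate=10.0):
            """flash_rates: flashes/min sampled every 2 min, most recent last."""
            dfrdt = np.diff(flash_rates) / 2.0       # flashes/min^2
            history, current = dfrdt[:-1], dfrdt[-1]
            if flash_rates[-1] < min_rate:           # storm too weak to evaluate
                return False
            return current > 2.0 * np.std(history)

        rates = np.array([8, 9, 11, 12, 13, 14, 30.0])   # surge in the last period
        print(lightning_jump(rates))                      # True -> jump flagged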

  20. A multilevel approach to modeling of porous bioceramics

    NASA Astrophysics Data System (ADS)

    Mikushina, Valentina A.; Sidorenko, Yury N.

    2015-10-01

    The paper is devoted to the discussion of multiscale models of heterogeneous materials. The specificity of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the material's reinforcement structure into account. Within the framework of such a model, different physical processes that influence the effective mechanical properties of the composite may be considered, in particular the process of damage accumulation. It is shown that this approach can be used to predict the value of the composite's macroscopic ultimate strength. As an example, the particular problem of studying the mechanical properties of a biocomposite representing a porous ceramic matrix filled with cortical bone tissue is discussed.

  1. Numerical Modeling of Unsteady Thermofluid Dynamics in Cryogenic Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    2003-01-01

    A finite volume based network analysis procedure has been applied to model unsteady flow without and with heat transfer. The liquid has been modeled as a compressible fluid whose compressibility factor is computed from the equation of state for a real fluid. The modeling approach recognizes that the pressure oscillation is linked with the variation of the compressibility factor; therefore, the speed of sound does not explicitly appear in the governing equations. The numerical results for the chilldown process also suggest that the flow and heat transfer are strongly coupled. This is evident by observing that the mass flow rate during the 90-second chilldown process increases by a factor of ten.

  2. The NASA-AMES Research Center Stratospheric Aerosol Model. 1. Physical Processes and Computational Analogs

    NASA Technical Reports Server (NTRS)

    Turco, R. P.; Hamill, P.; Toon, O. B.; Whitten, R. C.; Kiang, C. S.

    1979-01-01

    A time-dependent one-dimensional model of the stratospheric sulfate aerosol layer is presented. In constructing the model, a wide range of basic physical and chemical processes are incorporated in order to avoid predetermining or biasing the model predictions. The simulation, which extends from the surface to an altitude of 58 km, includes the troposphere as a source of gases and condensation nuclei and as a sink for aerosol droplets. The size distribution of aerosol particles is resolved into 25 categories with particle radii increasing geometrically from 0.01 to 2.56 microns such that particle volume doubles between categories.
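
    The bin structure follows directly from the volume-doubling rule: doubling volume multiplies radius by 2^(1/3), and 0.01 um x 2^(24/3) = 2.56 um, which reproduces the quoted range exactly:

        radii = [0.01 * 2 ** (k / 3) for k in range(25)]   # microns
        print(f"{radii[0]:.2f} to {radii[-1]:.2f} um over {len(radii)} bins")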

  3. Dynamic stiffness of chemically and physically ageing rubber vibration isolators in the audible frequency range. Part 1: constitutive equations

    NASA Astrophysics Data System (ADS)

    Kari, Leif

    2017-09-01

    The constitutive equations of chemically and physically ageing rubber in the audible frequency range are modelled as a function of ageing temperature, ageing time, actual temperature, time and frequency. The constitutive equations are derived by assuming a nearly incompressible material with an elastic spherical response and a viscoelastic deviatoric response, using a Mittag-Leffler relaxation function of fractional derivative type, the main advantage being the minimum number of material parameters needed to successfully fit experimental data over a broad frequency range. The material is furthermore assumed essentially entropic and thermo-mechanically simple, while a modified Williams-Landel-Ferry shift function is used to take into account temperature dependence and physical ageing, with the fractional free volume evolution modelled by a nonlinear, fractional differential equation whose relaxation time is identical to that of the stress response and which is related to the fractional free volume by the Doolittle equation. Physical ageing is a reversible ageing process, including the trapping and freeing of polymer chain ends, polymer chain reorganizations and free volume changes. In contrast, chemical ageing is an irreversible process, mainly attributed to oxygen reacting with the polymer network, either damaging the network by scission or forming new polymer links. The chemical ageing is modelled by inner variables that are determined by inner fractional evolution equations. Finally, the model parameters are fitted to measurement results for natural rubber over a broad audible frequency range, and various parameter studies are performed, including comparison with results obtained by ordinary, non-fractional ageing evolution differential equations.
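
    The unmodified WLF shift factor that the paper's modified version builds on is compact. A minimal sketch using the textbook "universal" constants; the reference temperature is an assumption, and the paper's version additionally evolves the fractional free volume:

        def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
            """log10 of the horizontal shift factor a_T."""
            return -C1 * (T - T_ref) / (C2 + (T - T_ref))

        T_ref = 213.0    # K, assumed reference near a rubber glass transition
        for T in (223.0, 243.0, 263.0):
            print(f"T = {T} K, a_T = {10 ** wlf_shift(T, T_ref):.3e}")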

  4. Reproducible MRI Measurement of Adipose Tissue Volumes in Genetic and Dietary Rodent Obesity Models

    PubMed Central

    Johnson, David H.; Flask, Chris A.; Ernsberger, Paul R.; Wong, Wilbur C. K.; Wilson, David L.

    2010-01-01

    Purpose: To develop ratio MRI [lipid/(lipid+water)] methods for assessing lipid depots and compare measurement variability to biological differences in lean controls (spontaneously hypertensive rats, SHRs), dietary obese (SHR-DO), and genetic/dietary obese (SHROB) animals. Materials and Methods: Images with and without CHESS water suppression were processed using a semi-automatic method accounting for relaxometry, chemical shift, receive coil sensitivity, and partial volume. Results: Partial volume correction improved results by 10-15%. Over six operators, volume variation was reduced to 1.9 ml, from 30.6 ml for single-image analysis with intensity inhomogeneity. For three acquisitions on the same animal, volume reproducibility was <1%. SHROBs had 6× the visceral and 8× the subcutaneous adipose tissue of SHRs. SHR-DOs had enlarged visceral depots (3× SHRs). SHROBs had significantly more subcutaneous adipose tissue, indicating a strong genetic component to this fat depot. Liver ratios in SHR-DO and SHROB were higher than in SHR, indicating elevated fat content. Among SHROBs, evidence suggested a phenotype, SHROB*, having elevated liver ratios and visceral adipose tissue volumes. Conclusion: Effects of diet and genetics on obesity were significantly larger than variations due to image acquisition and analysis, indicating that these methods can be used to assess accumulation/depletion of lipid depots in animal models of obesity. PMID:18821617

  5. Comparison of measured and modelled negative hydrogen ion densities at the ECR-discharge HOMER

    NASA Astrophysics Data System (ADS)

    Rauner, D.; Kurutz, U.; Fantz, U.

    2015-04-01

    As the negative hydrogen ion density nH- is a key parameter for the investigation of negative ion sources, its diagnostic quantification is essential in source development and operation as well as for fundamental research. By utilizing the photodetachment process of negative ions, generally two different diagnostic methods can be applied: via laser photodetachment, the density of negative ions is measured locally, but only relative to the electron density. To obtain absolute densities, the electron density has to be measured additionally, which introduces further uncertainties. Via cavity ring-down spectroscopy (CRDS), the absolute density of H- is measured directly, however LOS-averaged over the plasma length. At the ECR discharge HOMER, where H- is produced in the plasma volume, laser photodetachment is applied as the standard method to measure nH-. The additional application of CRDS provides the possibility to directly obtain absolute values of nH-, thereby successfully benchmarking the laser photodetachment system, as both diagnostics are in good agreement. In the investigated pressure range from 0.3 to 3 Pa, the measured negative hydrogen ion density shows a maximum at 1 to 1.5 Pa and an approximately linear response to increasing input microwave powers from 200 up to 500 W. Additionally, the volume production of negative ions is modelled 0-dimensionally by balancing H- production and destruction processes. The modelled densities are adapted to the absolute measurements of nH- via CRDS, allowing collisions of H- with hydrogen atoms (associative and non-associative detachment) to be identified as the dominant loss process of H- in the plasma volume at HOMER. Furthermore, the characteristic peak of nH- observed at 1 to 1.5 Pa is identified to be caused by a comparable behaviour of the electron density with varying pressure, as ne determines the volume production rate via dissociative electron attachment to vibrationally excited hydrogen molecules.

  6. CrossTalk. The Journal of Defense Software Engineering. Volume 23, Number 6, Nov/Dec 2010

    DTIC Science & Technology

    2010-11-01

    ...Model of architectural design. It guides developers to apply effort to their software architecture commensurate with the risks faced by... Driven Model is the promotion of risk to prominence. It is possible to apply the Risk-Driven Model to essentially any software development process... succeed without any planned architecture work, while many high-risk projects would fail without it. The Risk-Driven Model walks a middle path...

  7. National forest timber supply and stumpage markets in the western United States.

    Treesearch

    Darius M. Adams; Richard W. Haynes

    1991-01-01

    This paper presents an aggregate regional model of the National Forest timber supply process and the interaction of National Forest and non-National Forest supply in determining regional stumpage prices and harvest volumes. Model simulations track actual behavior in the Douglas-fir regional stumpage market with reasonable accuracy; projections for the next two decades...

  8. Models and Procedures for Improving the Planning, Management, and Evaluation of Cooperative Education Programs. Final Report. Volume I.

    ERIC Educational Resources Information Center

    Blaschke, Charles L.; Steiger, JoAnn

    This report of a project to design a set of training guidelines for planning, managing, and evaluating cooperative education programs describes briefly the procedures used in developing the guidelines and model; discusses the various components of the planning, management, and evaluation process; and presents guidelines and criteria for designing…

  9. Computational modelling of large deformations in layered-silicate/PET nanocomposites near the glass transition

    NASA Astrophysics Data System (ADS)

    Figiel, Łukasz; Dunne, Fionn P. E.; Buckley, C. Paul

    2010-01-01

    Layered-silicate nanoparticles offer a cost-effective reinforcement for thermoplastics. Computational modelling has been employed to study large deformations in layered-silicate/poly(ethylene terephthalate) (PET) nanocomposites near the glass transition, as would be experienced during industrial forming processes such as thermoforming or injection stretch blow moulding. Non-linear numerical modelling was applied, to predict the macroscopic large deformation behaviour, with morphology evolution and deformation occurring at the microscopic level, using the representative volume element (RVE) approach. A physically based elasto-viscoplastic constitutive model, describing the behaviour of the PET matrix within the RVE, was numerically implemented into a finite element solver (ABAQUS) using an UMAT subroutine. The implementation was designed to be robust, for accommodating large rotations and stretches of the matrix local to, and between, the nanoparticles. The nanocomposite morphology was reconstructed at the RVE level using a Monte-Carlo-based algorithm that placed straight, high-aspect ratio particles according to the specified orientation and volume fraction, with the assumption of periodicity. Computational experiments using this methodology enabled prediction of the strain-stiffening behaviour of the nanocomposite, observed experimentally, as functions of strain, strain rate, temperature and particle volume fraction. These results revealed the probable origins of the enhanced strain stiffening observed: (a) evolution of the morphology (through particle re-orientation) and (b) early onset of stress-induced pre-crystallization (and hence lock-up of viscous flow), triggered by the presence of particles. The computational model enabled prediction of the effects of process parameters (strain rate, temperature) on evolution of the morphology, and hence on the end-use properties.

  10. Growth process and model simulation of three different classes of Schima superba in a natural subtropical forest in China

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Deng, Xiangwen; Ouyang, Shuai; Chen, Lijun; Chu, Yonghe

    2017-01-01

    Schima superba is an important fire-resistant, high-quality timber species in southern China. Growth in height, diameter at breast height (DBH), and volume of three different classes (overtopped, average and dominant) of S. superba was examined in a natural subtropical forest. Four growth models (Richards, edited Weibull, Logistic and Gompertz) were selected to fit the growth of the three different classes of trees. The results showed that there was a fluctuation phenomenon in the height and DBH current annual growth of all three classes. Multiple intersections were found between the current annual increment (CAI) and mean annual increment (MAI) curves of both height and DBH, but there was no intersection between the volume CAI and MAI curves. All selected models could be used to fit the growth of the three classes of S. superba, with coefficients of determination above 0.9637. However, the edited Weibull model performed best, with the highest R2 and the lowest root mean square error (RMSE). S. superba is a fast-growing tree with a higher growth rate during youth. The height and DBH CAIs of overtopped, average and dominant trees reached their growth peaks at ages 5-10, 10-15 and 15-20 years, respectively. According to the model simulation, the volume CAIs of overtopped, average and dominant trees reached their growth peaks at ages 17, 55 and 76 years, respectively. The biological rotation ages of the overtopped, average and dominant trees of S. superba were 29, 85 and 128 years, respectively.
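
    Fitting one of the four candidate models is routine with standard tools. A minimal sketch for the Gompertz curve with invented age-volume data (the paper's data are not reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, A, b, k):
            """A: asymptote, b: displacement, k: growth-rate parameter."""
            return A * np.exp(-b * np.exp(-k * t))

        age = np.array([5, 10, 20, 30, 40, 60, 80, 100], dtype=float)      # years
        vol = np.array([0.01, 0.04, 0.15, 0.30, 0.42, 0.55, 0.60, 0.62])   # m^3

        params, _ = curve_fit(gompertz, age, vol, p0=(0.7, 5.0, 0.05))
        resid = vol - gompertz(age, *params)
        r2 = 1 - np.sum(resid**2) / np.sum((vol - vol.mean())**2)
        print(f"A = {params[0]:.2f} m^3, k = {params[2]:.3f}/yr, R^2 = {r2:.4f}")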

  11. Effect of rheological parameters on curing rate during NBR injection molding

    NASA Astrophysics Data System (ADS)

    Kyas, Kamil; Stanek, Michal; Manas, David; Skrobak, Adam

    2013-04-01

    In this work, the non-isothermal injection molding process for an NBR rubber mixture was modeled using the finite element method, considering the Isayev-Deng curing kinetic model and a generalized Newtonian model with Carreau-WLF viscosity, in order to understand the effect of volume flow rate, the index of non-Newtonian behavior and the relaxation time on the temperature profile and curing rate. It was found that for a specific geometry and processing conditions, an increase in the relaxation time or in the index of non-Newtonian behavior increases the curing rate due to viscous dissipation taking place at the flow domain walls.
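
    A generalized Newtonian description of this kind combines a shear-thinning law with a temperature shift. A minimal sketch of a Carreau viscosity under a WLF shift; all parameter values are illustrative assumptions, not the fitted NBR values:

        def a_T(T, T_ref=373.0, C1=8.86, C2=101.6):
            """WLF temperature shift factor (assumed constants)."""
            return 10 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))

        def carreau_wlf(gamma_dot, T, eta0=1e5, lam=1.0, n=0.3):
            """Shear viscosity, Pa*s, vs shear rate (1/s) and temperature (K)."""
            s = a_T(T)
            return eta0 * s * (1.0 + (lam * s * gamma_dot) ** 2) ** ((n - 1) / 2)

        for rate in (0.1, 10.0, 1000.0):   # typical molding shear rates
            print(f"{rate:>7.1f} 1/s -> {carreau_wlf(rate, 393.0):.3e} Pa*s")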

  12. A new multicompartmental reaction-diffusion modeling method links transient membrane attachment of E. coli MinE to E-ring formation.

    PubMed

    Arjunan, Satya Nanda Vel; Tomita, Masaru

    2010-03-01

    Many important cellular processes are regulated by reaction-diffusion (RD) of molecules that takes place both in the cytoplasm and on the membrane. To model and analyze such multicompartmental processes, we developed a lattice-based Monte Carlo method, Spatiocyte that supports RD in volume and surface compartments at single molecule resolution. Stochasticity in RD and the excluded volume effect brought by intracellular molecular crowding, both of which can significantly affect RD and thus, cellular processes, are also supported. We verified the method by comparing simulation results of diffusion, irreversible and reversible reactions with the predicted analytical and best available numerical solutions. Moreover, to directly compare the localization patterns of molecules in fluorescence microscopy images with simulation, we devised a visualization method that mimics the microphotography process by showing the trajectory of simulated molecules averaged according to the camera exposure time. In the rod-shaped bacterium Escherichia coli, the division site is suppressed at the cell poles by periodic pole-to-pole oscillations of the Min proteins (MinC, MinD and MinE) arising from carefully orchestrated RD in both cytoplasm and membrane compartments. Using Spatiocyte we could model and reproduce the in vivo MinDE localization dynamics by accounting for the previously reported properties of MinE. Our results suggest that the MinE ring, which is essential in preventing polar septation, is largely composed of MinE that is transiently attached to the membrane independently after recruited by MinD. Overall, Spatiocyte allows simulation and visualization of complex spatial and reaction-diffusion mediated cellular processes in volumes and surfaces. As we showed, it can potentially provide mechanistic insights otherwise difficult to obtain experimentally. The online version of this article (doi:10.1007/s11693-009-9047-2) contains supplementary material, which is available to authorized users.

  13. Revised Condition Rating Survey Models to Reflect All Distresses : Volume 1

    DOT National Transportation Integrated Search

    2018-04-01

    Pavement condition assessment plays a key role in infrastructure programming and planning processes. Similar to other state agencies, the Illinois Department of Transportation (IDOT) has been using a system to evaluate the condition of pavements sinc...

  14. Volume I: fluidized-bed code documentation, for the period February 28, 1983-March 18, 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piperopoulou, H.; Finson, M.; Bloomfield, D.

    1983-03-01

    This documentation supersedes the previous documentation of the Fluidized-Bed Gasifier code. Volume I documents a simulation program of a Fluidized-Bed Gasifier (FBG), and Volume II documents a systems model of the FBG. The FBG simulation program is an updated version of the PSI/FLUBED code which is capable of modeling slugging beds and variable bed diameter. In its present form the code is set up to model a Westinghouse commercial scale gasifier. The fluidized bed gasifier model combines the classical bubbling bed description for the transport and mixing processes with PSI-generated models for coal chemistry. At the distributor plate, the bubble composition is that of the inlet gas and the initial bubble size is set by the details of the distributor plate. Bubbles grow by coalescence as they rise. The bubble composition and temperature change with height due to transport to and from the cloud as well as homogeneous reactions within the bubble. The cloud composition also varies with height due to cloud/bubble exchange, cloud/emulsion exchange, and heterogeneous coal char reactions. The emulsion phase is considered to be well mixed.

  15. Green roof hydrologic performance and modeling: a review.

    PubMed

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93%, and delay the peak flow by 0 to 30 min, and can thereby decrease pollution, flooding and erosion during precipitation events. However, the effectiveness can vary substantially due to design characteristics, making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including the stormwater management model (SWMM), soil-water-atmosphere-plant (SWAP), SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for the reduction of runoff volume and peak flow and the delay of peak flow; however, no tool or model is available to predict expected performance for any given anticipated system based on design parameters that directly affect green roof hydrology.

  16. Evolution of material properties during free radical photopolymerization

    NASA Astrophysics Data System (ADS)

    Wu, Jiangtao; Zhao, Zeang; Hamel, Craig M.; Mu, Xiaoming; Kuang, Xiao; Guo, Zaoyang; Qi, H. Jerry

    2018-03-01

    Photopolymerization is a widely used polymerization method in many engineering applications such as coating, dental restoration, and 3D printing. It is a complex chemical and physical process, through which a liquid monomer solution is rapidly converted to a solid polymer. In the most common free-radical photopolymerization process, the photoinitiator in the solution is exposed to light and decomposes into active radicals, which attach to monomers to start the polymerization reaction. The activated monomers then attack C=C double bonds of unsaturated monomers, which leads to the growth of polymer chains. With increases in the polymer chain length and the average molecular weight, polymer chains start to connect and form a network structure, and the liquid polymer solution becomes a dense solid. During this process, the material properties of the cured polymer change dramatically. In this paper, experiments and theoretical modeling are used to investigate the free-radical photopolymerization reaction kinetics, material property evolution and mechanics during the photopolymerization process. The model employs first-order chemical reaction rate equations to calculate the variation of the species concentrations. The degree of monomer conversion is used as an internal variable that dictates the mechanical properties of the cured polymer at different curing states, including volume shrinkage, glass transition temperature, and nonlinear viscoelastic properties. To capture the nonlinear behavior of the cured polymer under low temperature and finite deformation, a multibranch nonlinear viscoelastic model is developed. A phase evolution model is used to describe the mechanics of the coupling between the crosslink network evolution and mechanical loading during the curing process. The comparison of the model and the experimental results indicates that the model can capture property changes during curing. The model is further applied to investigate the internal stress of a thick sample caused by volume shrinkage during photopolymerization. Changes in the conversion degree gradient and the internal stress during photopolymerization are determined using FEM simulation. The model can be extended to many photocuring processes, such as photopolymerization 3D printing, surface coating and automotive part curing processes.
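
    The first-order species kinetics described above reduce to a small ODE system: initiator decays under light, radicals propagate through monomer, and conversion follows. A minimal forward-Euler sketch with illustrative rate constants (not the paper's fitted values):

        dt, t_end = 1e-3, 5.0
        k_i, k_p, k_t = 2.0, 50.0, 10.0   # assumed initiation/propagation/termination rates
        I, R, M = 0.02, 0.0, 1.0          # normalized concentrations
        M0 = M
        for _ in range(int(t_end / dt)):              # forward Euler integration
            dI = -k_i * I                             # photoinitiator consumption
            dR = 2 * k_i * I - k_t * R**2             # two radicals per initiator split
            dM = -k_p * R * M                         # monomer consumption
            I, R, M = I + dt * dI, R + dt * dR, M + dt * dM
        print(f"conversion after {t_end} s: {1.0 - M / M0:.2%}")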

  17. Mathematical model of bone drilling for virtual surgery system

    NASA Astrophysics Data System (ADS)

    Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.

    2018-04-01

    Bone drilling is an essential part of surgery in ENT and dentistry. Proper training of drill handling skills is impossible without proper modelling of the drilling process. The utilization of high-precision methods like FEM is limited by the 1000 Hz update rate required for haptic feedback. The study presents a mathematical model of the drilling process that accounts for the properties of materials, the geometry, and the rotation rate of a burr to compute the removed material volume. The simplicity of the model allows for integrating it in the high-frequency haptic thread. The precision of the model is sufficient for a virtual surgery system targeted at training basic surgery skills.

  18. Start-up and operating costs for artisan cheese companies.

    PubMed

    Bouma, Andrea; Durham, Catherine A; Meunier-Goddik, Lisbeth

    2014-01-01

    Lack of valid economic data for artisan cheese making is a serious impediment to developing a realistic business plan and obtaining financing. The objective of this study was to determine approximate start-up and operating costs for an artisan cheese company. In addition, values are provided for the required size of processing and aging facilities associated with specific production volumes. Following in-depth interviews with existing artisan cheese makers, an economic model was developed to predict costs based on input variables such as production volume, production frequency, cheese types, milk types and cost, labor expenses, and financing. Estimated values for start-up cost for processing and aging facility ranged from $267,248 to $623,874 for annual production volumes of 3,402 kg (7,500 lb) and 27,216 kg (60,000 lb), respectively. First-year production costs ranged from $65,245 to $620,094 for the above-mentioned production volumes. It is likely that high start-up and operating costs remain a significant entry barrier for artisan cheese entrepreneurs. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.

    2001-01-01

    Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.

  20. Production, pathways and budgets of melts in mid-ocean ridges: An enthalpy based thermo-mechanical model

    NASA Astrophysics Data System (ADS)

    Mandal, Nibir; Sarkar, Shamik; Baruah, Amiya; Dutta, Urmi

    2018-04-01

    Using an enthalpy-based thermo-mechanical model, we provide a theoretical evaluation of melt production beneath mid-ocean ridges (MORs) and demonstrate how the melts subsequently develop their pathways to sustain the major ridge processes. Our model employs a Darcy idealization of the two-phase (solid-melt) system, accounting for enthalpy (ΔH) as a function of the temperature-dependent liquid fraction (ϕ). Random thermal perturbations imposed in this model set up local convection that drives melts to flow through porosity-controlled pathways with a typical mushroom-like 3D structure. We present across- and along-MOR-axis model profiles to show the mode of occurrence of melt-rich zones within mushy regions, connected to deeper sources by single or multiple feeders. The upwelling melts experience two synchronous processes: 1) solidification-accretion, and 2) eruption, retaining a large melt fraction in the framework of mantle dynamics. Using a bifurcation analysis we determine the threshold condition for melt eruption, and estimate the potential volumes of eruptible melts (∼3.7 × 10⁶ m³/yr) and sub-crustal solidified masses (∼1-8.8 × 10⁶ m³/yr) over an axis length of 500 km. The solidification process far dominates over the eruption process in the initial phase, but declines rapidly on a time scale of 1 Myr. Consequently, the eruption rate takes over the solidification rate, but attains a nearly steady value for t > 1.5 Myr. We finally present a melt budget, in which a maximum of ∼5% of the total upwelling melt volume is available for eruption, whereas ∼19% goes to deeper-level solidification; the rest continues to participate in the sub-crustal processes.
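
    The core of an enthalpy-based scheme is recovering temperature and liquid fraction from the conserved enthalpy. A minimal sketch with generic mantle-like property values (assumed, not the paper's):

        rho, cp, L = 3300.0, 1200.0, 4e5    # kg/m^3, J/(kg K), J/kg, assumed
        T_sol, T_liq = 1400.0, 1600.0       # K, assumed solidus and liquidus

        def state_from_enthalpy(H):
            """H in J/m^3 (from 0 K); returns temperature and melt fraction."""
            H_sol = rho * cp * T_sol
            H_liq = H_sol + rho * cp * (T_liq - T_sol) + rho * L
            if H <= H_sol:                                   # fully solid
                return H / (rho * cp), 0.0
            if H >= H_liq:                                   # fully molten
                return T_liq + (H - H_liq) / (rho * cp), 1.0
            phi = (H - H_sol) / (H_liq - H_sol)              # linear mushy zone
            return T_sol + phi * (T_liq - T_sol), phi

        T, phi = state_from_enthalpy(rho * cp * 1500.0 + 0.5 * rho * L)
        print(f"T = {T:.0f} K, melt fraction = {phi:.2f}")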

  1. A flow-simulation model of the tidal Potomac River

    USGS Publications Warehouse

    Schaffranek, Raymond W.

    1987-01-01

    A one-dimensional model capable of simulating flow in a network of interconnected channels has been applied to the tidal Potomac River including its major tributaries and embayments between Washington, D.C., and Indian Head, Md. The model can be used to compute water-surface elevations and flow discharges at any of 66 predetermined locations or at any alternative river cross sections definable within the network of channels. In addition, the model can be used to provide tidal-interchange flow volumes and to evaluate tidal excursions and the flushing properties of the riverine system. Comparisons of model-computed results with measured water-surface elevations and discharges demonstrate the validity and accuracy of the model. Tidal-cycle flow volumes computed by the calibrated model have been verified to be within an accuracy of ±10 percent. Quantitative characteristics of the hydrodynamics of the tidal river are identified and discussed. The comprehensive flow data provided by the model can be used to better understand the geochemical, biological, and other processes affecting the river's water quality.

  2. Radiotracer investigation in gold leaching tanks.

    PubMed

    Dagadu, C P K; Akaho, E H K; Danso, K A; Stegowski, Z; Furman, L

    2012-01-01

    Measurement and analysis of residence time distribution (RTD) is a classical method to investigate performance of chemical reactors. In the present investigation, the radioactive tracer technique was used to measure the RTD of aqueous phase in a series of gold leaching tanks at the Damang gold processing plant in Ghana. The objective of the investigation was to measure the effective volume of each tank and validate the design data after recent process intensification or revamping of the plant. I-131 was used as a radioactive tracer and was instantaneously injected into the feed stream of the first tank and monitored at the outlet of different tanks. Both sampling and online measurement methods were used to monitor the tracer concentration. The results of measurements indicated that both the methods provided identical RTD curves. The mean residence time (MRT) and effective volume of each tank was estimated. The tanks-in-series model with exchange between active and stagnant volume was used and found suitable to describe the flow structure of aqueous phase in the tanks. The estimated effective volume of the tanks and high degree of mixing in tanks could validate the design data and confirmed the expectation of the plant engineer after intensification of the process. Copyright © 2011 Elsevier Ltd. All rights reserved.
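
    The ideal tanks-in-series RTD that serves as the baseline here is a one-liner. A minimal sketch (N and the mean residence time are illustrative; the paper's fitted model adds an active/stagnant exchange term):

        import numpy as np
        from math import factorial

        def rtd(theta, N):
            """Dimensionless RTD E(theta) for N ideal stirred tanks in series."""
            return N * (N * theta) ** (N - 1) * np.exp(-N * theta) / factorial(N - 1)

        tau = 60.0                        # min, assumed mean residence time
        theta = np.linspace(0, 3, 301)    # dimensionless time t / tau
        for N in (1, 3, 6):
            peak = theta[np.argmax(rtd(theta, N))]
            print(f"N = {N}: E(theta) peaks near t = {peak * tau:.0f} min")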

  3. High performance concentration method for viruses in drinking water.

    PubMed

    Kunze, Andreas; Pei, Lu; Elsässer, Dennis; Niessner, Reinhard; Seidel, Michael

    2015-09-15

    According to the risk assessment of the WHO, highly infectious pathogenic viruses like rotaviruses should not be present in large-volume drinking water samples of up to 90 m³. On the other hand, quantification methods for viruses are only operable in small volumes, and presently no concentration procedure for processing such large volumes has been reported. Therefore, the aim of this study was to demonstrate a procedure for concentrating viruses in-line from a drinking water pipeline by ultrafiltration (UF) and consecutive further concentration by monolithic filtration (MF) and centrifugal ultrafiltration (CeUF) to a final 1-mL sample. For testing this concept, the model virus bacteriophage MS2 was spiked continuously into the UF instrumentation. Tap water was processed in volumes between 32.4 m³ (22 h) and 97.7 m³ (72 h) continuously, either in dead-end (DE) or cross-flow (CF) mode. The best results were found for DE-UF over 22 h. The concentration of MS2 was increased from 4.2×10⁴ GU/mL (genomic units per milliliter) to 3.2×10¹⁰ GU/mL and from 71 PFU/mL to 2×10⁸ PFU/mL as determined by qRT-PCR and plaque assay, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Magmatic evolution of a Cordilleran flare-up and its role in the creation of silicic crust.

    PubMed

    Ward, Kevin M; Delph, Jonathan R; Zandt, George; Beck, Susan L; Ducea, Mihai N

    2017-08-22

    The role of magmatic processes as a significant mechanism for the generation of voluminous silicic crust and the development of Cordilleran plateaus remains a lingering question in part because of the inherent difficulty in quantifying plutonic volumes. Despite this difficulty, a growing body of independently measured plutonic-to-volcanic ratios suggests the volume of plutonic material in the crust related to Cordilleran magmatic systems is much larger than is previously expected. To better examine the role of crustal magmatic processes and its relationship to erupted material in Cordilleran systems, we present a continuous high-resolution crustal seismic velocity model for an ~800 km section of the active South American Cordillera (Puna Plateau). Although the plutonic-to-volcanic ratios we estimate vary along the length of the Puna Plateau, all ratios are larger than those previously reported (~30:1 compared to 5:1) implying that a significant volume of intermediate to silicic plutonic material is generated in the crust of the central South American Cordillera. Furthermore, as Cordilleran-type margins have been common since the onset of modern plate tectonics, our findings suggest that similar processes may have played a significant role in generating and/or modifying large volumes of continental crust, as observed in the continents today.

  5. Relaxation mechanisms in glassy dynamics: the Arrhenius and fragile regimes.

    PubMed

    Hentschel, H George E; Karmakar, Smarajit; Procaccia, Itamar; Zylberg, Jacques

    2012-06-01

    Generic glass formers exhibit at least two characteristic changes in their relaxation behavior, first to an Arrhenius-type relaxation at some characteristic temperature and then, at a lower characteristic temperature, to a super-Arrhenius (fragile) behavior. We address these transitions by studying the statistics of free energy barriers for different systems at different temperatures and space dimensions. We present clear evidence of changes in the dynamical behavior at the transition to Arrhenius and then to super-Arrhenius behavior. A simple model is presented, based on the idea of competition between single-particle and cooperative dynamics. We argue that Arrhenius behavior can take place as long as there is enough free volume for the completion of a simple T1 relaxation process. Once free volume is absent, one needs a cooperative mechanism to "collect" enough free volume. We show that this model captures all the qualitative behavior observed in simulations throughout the considered temperature range.

  6. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems in the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a parameter that is artificially set to a fixed value. Though volume scattering component overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that the proposed method has better performance in urban areas.
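
    The constrained least-squares step can be sketched with a nonnegative solver: express the observed polarimetric features as a nonnegative combination of three scattering mechanisms. The 4x3 design matrix below is a stand-in, not the paper's eigenvector-based models:

        import numpy as np
        from scipy.optimize import nnls

        A = np.array([[1.0, 0.6, 0.3],    # columns: surface, double-bounce, volume
                      [0.1, 0.8, 0.3],
                      [0.0, 0.1, 0.3],
                      [0.0, 0.0, 0.1]])
        observed = np.array([0.9, 0.5, 0.15, 0.04])   # hypothetical observables

        powers, residual = nnls(A, observed)          # enforces powers >= 0
        for name, p in zip(("surface", "double-bounce", "volume"), powers):
            print(f"{name:13s} power: {p:.3f}")
        print(f"residual norm: {residual:.4f}")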

  7. Mechanical properties and failure behavior of unidirectional porous ceramics

    NASA Astrophysics Data System (ADS)

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J.

    2016-04-01

    We show that the honeycomb out-of-plane model derived by Gibson and Ashby can be applied to describe the compressive behavior of unidirectional porous materials. Ice-templating allowed us to process samples with accurate control over pore volume, size, and morphology. These samples allowed us to evaluate the effect of these microstructural variations on the compressive strength in a porosity range of 45-80%. The maximum strength of 286 MPa was achieved in the least porous ice-templated sample (P(%) = 49.9), with the smallest pore size (3 μm). We found that the out-of-plane model only holds when buckling is the dominant failure mode, as should be expected. Furthermore, we controlled total pore volume by adjusting solids loading and sintering temperature. This strategy allows us to independently control macroporosity and densification of walls, and the compressive strength of ice-templated materials is exclusively dependent on total pore volume.
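    As an illustration of how strength-porosity data of this kind are commonly reduced, the sketch below fits a generic power law σ = σ0(1 - P)^n to synthetic data. The numbers are invented, and the fitted exponent makes no claim to match the Gibson-Ashby out-of-plane model parameters.

    ```python
    # Illustrative fit of sigma = sigma0 * (1 - P)**n to synthetic compressive
    # strength vs. porosity data. Values are invented, not the paper's data.
    import numpy as np
    from scipy.optimize import curve_fit

    def strength(P, sigma0, n):
        return sigma0 * (1.0 - P) ** n

    P = np.array([0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80])   # porosity
    sigma = np.array([290.0, 230.0, 175.0, 130.0, 95.0, 65.0, 42.0, 25.0])  # MPa

    (sigma0, n), _ = curve_fit(strength, P, sigma, p0=(500.0, 2.0))
    print(f"sigma0 = {sigma0:.0f} MPa, exponent n = {n:.2f}")
    ```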

  8. Mechanical properties and failure behavior of unidirectional porous ceramics.

    PubMed

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J

    2016-04-14

    We show that the honeycomb out-of-plane model derived by Gibson and Ashby can be applied to describe the compressive behavior of unidirectional porous materials. Ice-templating allowed us to process samples with accurate control over pore volume, size, and morphology. These samples allowed us to evaluate the effect of these microstructural variations on the compressive strength in a porosity range of 45-80%. The maximum strength of 286 MPa was achieved in the least porous ice-templated sample (P(%) = 49.9), with the smallest pore size (3 μm). We found that the out-of-plane model only holds when buckling is the dominant failure mode, as should be expected. Furthermore, we controlled total pore volume by adjusting solids loading and sintering temperature. This strategy allows us to independently control macroporosity and densification of walls, and the compressive strength of ice-templated materials is exclusively dependent on total pore volume.

  9. Dynamic knowledge representation using agent-based modeling: ontology instantiation and verification of conceptual models.

    PubMed

    An, Gary

    2009-01-01

    The sheer volume of biomedical research threatens to overwhelm the capacity of individuals to effectively process this information. Adding to this challenge is the multiscale nature of both biological systems and the research community as a whole. Given this volume and rate of generation of biomedical information, the research community must develop methods for robust representation of knowledge in order for individuals, and the community as a whole, to "know what they know." Despite increasing emphasis on "data-driven" research, the fact remains that researchers guide their research using intuitively constructed conceptual models derived from knowledge extracted from publications, knowledge that is generally qualitatively expressed using natural language. Agent-based modeling (ABM) is a computational modeling method that is suited to translating the knowledge expressed in biomedical texts into dynamic representations of the conceptual models generated by researchers. The hierarchical object-class orientation of ABM maps well to biomedical ontological structures, facilitating the translation of ontologies into instantiated models. Furthermore, ABM is suited to producing the nonintuitive behaviors that often "break" conceptual models. Verification in this context is focused on determining the plausibility of a particular conceptual model, and qualitative knowledge representation is often sufficient for this goal. Thus, utilized in this fashion, ABM can provide a powerful adjunct to other computational methods within the research process, as well as providing a metamodeling framework to enhance the evolution of biomedical ontologies.

  10. Reflective Practice: The Scholarship of Teaching and Learning. The CEET Faculty Development Program on Teaching and Learning. Second Edition: 2009 College Portfolio. Volumes I-IV

    ERIC Educational Resources Information Center

    Scarborough, Jule Dee

    2009-01-01

    "2009 Portfolio: The Second Edition of the College of Engineering's Portfolio" presents the 2009 Faculty Development Program on Teaching & Learning (TL) new content, modified models, new process and procedures, especially the new Instructional Analysis and Design Process Map, new PowerPoint presentations, modified teaching and…

  11. Model for refining operations

    NASA Technical Reports Server (NTRS)

    Dunbar, D. N.; Tunnah, B. G.

    1979-01-01

    Program predicts production volumes of petroleum refinery products, with particular emphasis on aircraft-turbine fuel blends and their key properties. It calculates capital and operating costs for the refinery and its margin of profitability. Program also includes provisions for processing synthetic crude oils from oil-shale and coal-liquefaction processes and contains highly detailed blending computations for alternative jet-fuel blends of varying endpoint specifications.
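    A toy version of the kind of blending computation such a model performs: choose component fractions to minimize cost subject to product-property constraints, here with scipy.optimize.linprog. All component data and limits are invented for illustration.

    ```python
    # Toy fuel-blending LP in the spirit of the refinery model's blending step:
    # minimize blend cost subject to property constraints. Invented data.
    from scipy.optimize import linprog

    # Components: (cost $/bbl, density-like property, aromatics %)
    cost =      [32.0, 28.0, 25.0]
    prop =      [0.78, 0.81, 0.86]
    aromatics = [12.0, 18.0, 28.0]

    # Constraints: fractions sum to 1; blend property <= 0.82; aromatics <= 20%
    A_ub = [prop, aromatics]
    b_ub = [0.82, 20.0]
    A_eq = [[1.0, 1.0, 1.0]]
    b_eq = [1.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * 3)
    print(res.x, f"cost = {res.fun:.2f} $/bbl")
    ```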

  12. Airglow during ionospheric modifications by the sura facility radiation. experimental results obtained in 2010

    NASA Astrophysics Data System (ADS)

    Grach, S. M.; Klimenko, V. V.; Shindin, A. V.; Nasyrov, I. A.; Sergeev, E. N.; Yashnov, V. A.; Pogorelko, N. A.

    2012-06-01

    We present the results of studying the structure and dynamics of the HF-heated volume above the Sura facility obtained in 2010 by measurements of ionospheric airglow in the red (λ = 630 nm) and green (λ = 557.7 nm) lines of atomic oxygen. Vertical sounding of the ionosphere (followed by modeling of the pump-wave propagation) and measurements of stimulated electromagnetic emission were used for additional diagnostics of ionospheric parameters and the processes occurring in the heated volume.

  13. Brain tissues volume measurements from 2D MRI using parametric approach

    NASA Astrophysics Data System (ADS)

    L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.

    2018-04-01

    The purpose of the paper is to propose a fully automated method for volume assessment of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process concerning measurement consistency and unequal observations. Outliers are detected using the maximum normalized residual test. We propose a statistical model which utilizes knowledge of tissue distribution in the human brain and applies partial data restoration to improve precision. The approach is computationally efficient and independent of the segmentation algorithm used in the application.
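    The maximum normalized residual (Grubbs) test mentioned above can be sketched as follows; this is a generic implementation, not the authors' code.

    ```python
    # Generic sketch of the maximum normalized residual (Grubbs) outlier test,
    # not the authors' implementation.
    import numpy as np
    from scipy import stats

    def grubbs_outlier(x, alpha=0.05):
        """Return index of the most extreme point if it is an outlier, else None."""
        x = np.asarray(x, dtype=float)
        n = x.size
        resid = np.abs(x - x.mean())
        i = int(np.argmax(resid))
        G = resid[i] / x.std(ddof=1)          # maximum normalized residual
        # Critical value from the t distribution (two-sided test)
        t = stats.t.ppf(1.0 - alpha / (2.0 * n), n - 2)
        G_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
        return i if G > G_crit else None

    measurements = [10.1, 10.3, 9.9, 10.2, 13.5, 10.0]
    print(grubbs_outlier(measurements))   # -> 4 (the 13.5 value)
    ```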

  14. Research on volume metrology method of large vertical energy storage tank based on internal electro-optical distance-ranging method

    NASA Astrophysics Data System (ADS)

    Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang

    2018-01-01

    A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key processing algorithms for the point cloud data, such as gross error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m3 is selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
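    A minimal sketch of the volume-calculation step described above: given cross-sectional areas A(z) estimated from the point cloud at a set of heights, the cumulative volume up to each liquid level follows from trapezoidal integration. The radius profile below is a placeholder, not measured data.

    ```python
    # Minimal sketch of computing tank volume vs. liquid level by integrating
    # cross-sectional areas along the vertical axis. The radius profile is a
    # placeholder, not measured point-cloud data.
    import numpy as np

    z = np.linspace(0.0, 20.0, 201)          # heights above the tank bottom, m
    r = 18.0 + 0.01 * np.sin(z)              # fitted shell radius at each height, m
    A = np.pi * r**2                          # horizontal cross-sectional area, m^2

    # Cumulative volume up to each level (trapezoidal rule), as for a gauge table
    V = np.concatenate(([0.0], np.cumsum(0.5 * (A[1:] + A[:-1]) * np.diff(z))))
    print(f"volume at 10 m fill: {V[100]:.0f} m^3")
    ```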

  15. On the effect of hydrostatic pressure on the conformational stability of globular proteins.

    PubMed

    Graziano, Giuseppe

    2015-12-01

    The model developed for cold denaturation (Graziano, PCCP 2010, 12, 14245-14252) is extended to rationalize the dependence of protein conformational stability upon hydrostatic pressure, at room temperature. A pressure-volume work term is associated with the process of cavity creation, owing to the need to enlarge the liquid volume against hydrostatic pressure. This contribution destabilizes the native state, which has a molecular volume slightly larger than that of the denatured state due to voids existing in the protein core. Therefore, there is a hydrostatic pressure value at which the pressure-volume contribution plus the conformational entropy loss of the polypeptide chain are able to overwhelm the stabilizing gain in translational entropy of water molecules, due to the decrease in water-accessible surface area upon folding, causing denaturation. © 2015 Wiley Periodicals, Inc.
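    A first-order summary of this balance, consistent with the abstract but in our paraphrase rather than the author's exact notation, treats the denaturation volume change as pressure independent:

    ```latex
    % Hedged paraphrase: linear pressure dependence of the denaturation
    % Gibbs energy. Delta V_d = V_D - V_N < 0 because voids in the native
    % core are lost upon unfolding.
    \Delta G_{d}(p) \;\approx\; \Delta G_{d}(p_{0}) \;+\; (p - p_{0})\,\Delta V_{d},
    \qquad \Delta V_{d} = V_{D} - V_{N} < 0 .
    ```

    Because ΔV_d is negative, ΔG_d(p) decreases with pressure and vanishes at a denaturation pressure p* ≈ p0 - ΔG_d(p0)/ΔV_d, which lies above p0 since ΔG_d(p0) > 0.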

  16. Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.

    2014-12-01

    In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 petabytes, while the upcoming next generation of NASA decadal Earth Observing instruments is expected to collect tens of gigabytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the exabytes/day range, of which (after reduction and processing) around 1.5 exabytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data intensive science.
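    A toy estimator in the spirit of what such a simulation model must compute: for a linear chain of tasks, elapsed time decomposes into transfer time (volume/bandwidth) plus compute time (volume/throughput). The task list and rates below are invented; this is not the DAWN code.

    ```python
    # Toy estimator of workflow elapsed time for a linear chain of tasks:
    # transfer time (volume / bandwidth) plus compute time (volume / throughput).
    # Task parameters are invented; this is not the DAWN implementation.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        in_volume_tb: float          # data volume moved into the task, TB
        bandwidth_gbps: float        # network bandwidth available, Gbit/s
        throughput_tb_per_h: float   # processing rate once data is local, TB/h

        def hours(self) -> float:
            # 1 TB = 8000 Gbit; transfer time in hours, then compute time
            transfer_h = self.in_volume_tb * 8e3 / self.bandwidth_gbps / 3600.0
            compute_h = self.in_volume_tb / self.throughput_tb_per_h
            return transfer_h + compute_h

    workflow = [
        Task("subset", 500.0, 10.0, 50.0),
        Task("regrid", 100.0, 10.0, 20.0),
        Task("analyze", 20.0, 1.0, 5.0),
    ]
    total = sum(t.hours() for t in workflow)
    print(f"estimated elapsed time: {total:.1f} h")
    ```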

  17. Defining stem profile model for wood valuation of red pine in Ontario and Michigan with consideration of stand density influence on tree taper

    Treesearch

    W. T. Zakrzewski; M. Penner; D. W. MacFarlane

    2007-01-01

    As part of the Canada-United States Great Lakes Stem Profile Modelling Project, established to support the local timber production process and to enable cross-border comparisons of timber volumes, here we present results of fitting Zakrzewski's (1999) stem profile model for red pine (Pinus resinosa Ait.) growing in Michigan, United States, and...

  18. Two-Dimensional Mathematical Modeling of the Pack Carburizing Process

    NASA Astrophysics Data System (ADS)

    Sarkar, S.; Gupta, G. S.

    2008-10-01

    Pack carburization is the oldest of the case-hardening treatments, and sufficient attempts have not been made to understand the process in terms of heat and mass transfer, the effect of alloying elements, sample dimensions, etc. Thus, a two-dimensional mathematical model in cylindrical coordinates is developed in this study for simulating the pack carburizing process for chromium-bearing steel. Heat and mass balance equations are solved simultaneously, where the surface temperature of the sample varies with time but the carbon potential at the surface remains constant during the process. The fully implicit finite volume technique is used to solve the governing equations. Good agreement has been found between the predicted and published data. The effects of temperature, carburizing time, sample dimensions, etc. on the pack carburizing process show some interesting results. It is found that the two-dimensional model gives better insight into understanding the carburizing process.
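    A minimal one-dimensional analogue of the fully implicit finite-volume approach mentioned above, applied to carbon diffusion with a fixed surface carbon potential. Geometry, properties, and boundary values are illustrative; the paper's model is two-dimensional in cylindrical coordinates and coupled with heat transfer.

    ```python
    # 1-D fully implicit finite-volume sketch of carbon diffusion with a fixed
    # surface carbon potential. Illustrative values only; the paper's model is
    # 2-D in cylindrical coordinates and coupled with heat transfer.
    import numpy as np

    N, L = 50, 2e-3                 # cells, domain depth (m)
    dx = L / N
    D = 1e-11                       # carbon diffusivity, m^2/s (illustrative)
    dt = 60.0                       # time step, s
    C = np.full(N, 0.2)             # initial carbon content, wt%
    C_surface = 1.1                 # fixed carbon potential at the surface, wt%

    r = D * dt / dx**2
    # Tridiagonal implicit system: Dirichlet (ghost cell) at the surface,
    # zero-flux at the core side.
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 1.0 + 2.0 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < N - 1:
            A[i, i + 1] = -r
    A[0, 0] = 1.0 + 3.0 * r          # ghost-cell Dirichlet at the surface
    A[-1, -1] = 1.0 + r              # zero-flux (insulated) inner boundary

    for step in range(240):          # 4 hours of carburizing
        b = C.copy()
        b[0] += 2.0 * r * C_surface  # surface boundary source term
        C = np.linalg.solve(A, b)

    print(f"carbon at ~0.2 mm depth: {C[4]:.2f} wt%")
    ```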

  19. Fluid transport in reaction induced fractures

    NASA Astrophysics Data System (ADS)

    Ulven, Ole Ivar; Sun, WaiChing; Malthe-Sørenssen, Anders

    2015-04-01

    The process of fracture formation due to a volume increasing chemical reaction has been studied in a variety of different settings, e.g. weathering of dolerites by Røyne et al. (2008), serpentinization and carbonation of peridotite by Rudge et al. (2010), and replacement reactions in silica-poor igneous rocks by Jamtveit et al. (2009). It is generally assumed that fracture formation will increase the net permeability of the rock, and thus increase the reactant transport rate and subsequently the total rate of material conversion, as summarised by Kelemen et al. (2011). Ulven et al. (2014a) have shown that for fluid-mediated processes the ratio between chemical reaction rate and fluid transport rate in bulk rock controls the fracture pattern formed, and Ulven et al. (2014b) have shown that instantaneous fluid transport in fractures leads to a significant increase in the total rate of the volume expanding process. However, instantaneous fluid transport in fractures is clearly an overestimate, and achievable fluid transport rates in fractures have apparently not been studied in any detail. Fractures cutting through an entire domain might experience relatively fast advective reactant transport, whereas dead-end fractures will be limited to diffusion of reactants in the fluid, internal fluid mixing in the fracture, or capillary flow into newly formed fractures. Understanding the feedback process between fracture formation and permeability changes is essential in assessing industrial scale CO2 sequestration in ultramafic rock, but little is seemingly known about how large the permeability change will be in reaction-induced fracturing. In this work, we study the feedback between fracture formation during volume expansion and fluid transport in different fracture settings. We combine a discrete element model (DEM) describing a volume expanding process and the related fracture formation with different models that describe the fluid transport in the fractures. This provides new information on how much reaction induced fracturing might accelerate a volume expanding process.
    References (2014a/2014b assigned in listed order):
    Jamtveit, B., Putnis, C. V., and Malthe-Sørenssen, A., "Reaction induced fracturing during replacement processes," Contrib. Mineral. Petrol. 157, 2009, pp. 127-133.
    Kelemen, P., Matter, J., Streit, E. E., Rudge, J. F., Curry, W. B., and Blusztajn, J., "Rates and Mechanisms of Mineral Carbonation in Peridotite: Natural Processes and Recipes for Enhanced, in situ CO2 Capture and Storage," Annu. Rev. Earth Planet. Sci. 39, 2011, pp. 545-576.
    Rudge, J. F., Kelemen, P. B., and Spiegelman, M., "A simple model of reaction induced cracking applied to serpentinization and carbonation of peridotite," Earth Planet. Sci. Lett. 291, Issues 1-4, 2010, pp. 215-227.
    Røyne, A., Jamtveit, B., and Malthe-Sørenssen, A., "Controls on rock weathering rates by reaction-induced hierarchical fracturing," Earth Planet. Sci. Lett. 275, 2008, pp. 364-369.
    Ulven, O. I., Storheim, H., Austrheim, H., and Malthe-Sørenssen, A., "Fracture initiation during volume increasing reactions in rocks and applications for CO2 sequestration," Earth Planet. Sci. Lett. 389C, 2014a, pp. 132-142, doi:10.1016/j.epsl.2013.12.039.
    Ulven, O. I., Jamtveit, B., and Malthe-Sørenssen, A., "Reaction-driven fracturing of porous rock," J. Geophys. Res. Solid Earth 119, 2014b, doi:10.1002/2014JB011102.

  20. Computational study of textured ferroelectric polycrystals: Dielectric and piezoelectric properties of template-matrix composites

    NASA Astrophysics Data System (ADS)

    Zhou, Jie E.; Yan, Yongke; Priya, Shashank; Wang, Yu U.

    2017-01-01

    Quantitative relationships between processing, microstructure, and properties in textured ferroelectric polycrystals and the underlying responsible mechanisms are investigated by phase field modeling and computer simulation. This study focuses on three important aspects of textured ferroelectric ceramics: (i) grain microstructure evolution during templated grain growth processing, (ii) crystallographic texture development as a function of volume fraction and seed size of the templates, and (iii) dielectric and piezoelectric properties of the obtained template-matrix composites of textured polycrystals. Findings on the third aspect are presented here, while an accompanying paper of this work reports findings on the first two aspects. In this paper, the competing effects of crystallographic texture and template seed volume fraction on the dielectric and piezoelectric properties of ferroelectric polycrystals are investigated. The phase field model of ferroelectric composites consisting of template seeds embedded in matrix grains is developed to simulate domain evolution, polarization-electric field (P-E), and strain-electric field (ɛ-E) hysteresis loops. The coercive field, remnant polarization, dielectric permittivity, piezoelectric coefficient, and dissipation factor are studied as a function of grain texture and template seed volume fraction. It is found that, while crystallographic texture significantly improves the polycrystal properties towards those of single crystals, a higher volume fraction of template seeds tends to decrease the electromechanical properties, thus canceling the advantage of ferroelectric polycrystals textured by templated grain growth processing. This competing detrimental effect is shown to arise from the composite effect, where the template phase possesses material properties inferior to the matrix phase, causing mechanical clamping and charge accumulation at inter-phase interfaces between matrix and template inclusions. The computational results are compared with complementary experiments, where good agreement is obtained.

  1. ArcGIS Framework for Scientific Data Analysis and Serving

    NASA Astrophysics Data System (ADS)

    Xu, H.; Ju, W.; Zhang, J.

    2015-12-01

    ArcGIS is a platform for managing, visualizing, analyzing, and serving geospatial data. Scientific data, as part of geospatial data, feature multiple dimensions (X, Y, time, and depth) and large volume. The multidimensional mosaic dataset (MDMD), a newly enhanced data model in ArcGIS, models multidimensional gridded data (e.g. raster or image) as a hypercube and enables ArcGIS to handle large-volume and near-real-time scientific data. Built on top of the geodatabase, the MDMD stores the dimension values and the variables (2D arrays) in a geodatabase table, which allows accessing a slice or slices of the hypercube through a simple query and supports animating changes along the time or vertical dimension using ArcGIS desktop or web clients. Through raster types, the MDMD can manage not only netCDF, GRIB, and HDF formats but also many other formats and satellite data. It is scalable and can handle large data volumes. The parallel geoprocessing engine makes data ingestion fast and easy. A raster function, the definition of a raster processing algorithm, is a very important component of the ArcGIS platform for on-demand raster processing and analysis. Scientific data analytics is achieved through the MDMD and raster function templates, which perform on-demand scientific computation with variables ingested in the MDMD: for example, aggregating monthly averages from daily data; computing the total rainfall of a year; calculating the heat index for forecast data; and identifying fishing habitat zones. Additionally, the MDMD with the associated raster function templates can be served through ArcGIS Server as image services, which provide a framework for on-demand server-side computation and analysis, and the published services can be accessed by multiple clients such as ArcMap, ArcGIS Online, JavaScript, REST, WCS, and WMS. This presentation will focus on the MDMD model and raster processing templates. In addition, MODIS land cover, the NDFD weather service, and the HYCOM ocean model will be used to illustrate how the ArcGIS platform and MDMD model can facilitate scientific data visualization and analytics, and how the analysis results can be shared with a wider audience through ArcGIS Online and Portal.
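    To make the hypercube idea concrete, the sketch below uses xarray (a generic library, deliberately not the ArcGIS API) to slice a (time, y, x) cube and aggregate monthly means from daily data, the kind of on-demand computation a raster function template would encode.

    ```python
    # Generic illustration (with xarray, not the ArcGIS API) of slicing a
    # multidimensional hypercube and aggregating monthly means from daily data.
    import numpy as np
    import pandas as pd
    import xarray as xr

    time = pd.date_range("2015-01-01", periods=365, freq="D")
    cube = xr.DataArray(
        np.random.rand(365, 90, 180),
        coords={"time": time, "y": np.arange(90), "x": np.arange(180)},
        dims=("time", "y", "x"),
        name="temperature",
    )

    one_slice = cube.sel(time="2015-03-15")       # a single time slice
    monthly = cube.resample(time="MS").mean()     # monthly means from daily data
    print(monthly.sizes)                          # {'time': 12, 'y': 90, 'x': 180}
    ```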

  2. Progress Towards a Thermo-Mechanical Magma Chamber Forward Model for Eruption Cycles, Applied to the Columbia River Flood Basalts

    NASA Astrophysics Data System (ADS)

    Karlstrom, L.; Ozimek, C.

    2016-12-01

    Magma chamber modeling has advanced to the stage where it is now possible to develop self-consistent, predictive models that consider mechanical, thermal, and compositional magma time evolution through multiple eruptive cycles. We have developed such a thermo-mechanical-chemical model for a laterally extensive sill-like chamber beneath a free surface, to understand physical controls on eruptive products through time at long-lived magmatic centers. This model predicts the relative importance of recharge, eruption, assimilation and fractional crystallization (REAFC, Lee et al., 2013) on evolving chemical composition as a function of mechanical magma chamber stability regimes. We solve for the time evolution of chamber pressure, temperature, gas volume fraction, volume, elemental concentration in the melt, and the crustal temperature field, accounting for the moving boundary conditions associated with chamber inflation (and the possibility of coupled chambers at different depths). The density, volume fractions of melt and crystals, crustal assimilation, and the changing viscosity and crustal properties of the wall rock are also tracked, along with the joint solubility of water and CO2. The eventual goal is to develop an efficient forward model to invert for eruptive records at long-lived eruptive centers, where multiple types of data for eruptions are available. As a first step, we apply this model to a new compilation of eruptive data from the Columbia River Flood Basalts (CRFB), which erupted 210,000 km3 from feeder dikes in Washington, Oregon and Idaho between 16.9 and 6 Ma. Data include volumes, timing and geochemical composition of eruptive units, along with seismic surveys and clinopyroxene geobarometry that constrain depth of storage through time. We are in the process of performing a suite of simulations varying model input parameters such as mantle melt rate, emplacement depth, wall rock composition and rheology, and volatile content to explain the volumes, eruption timescales, and chemical aspects of CRFB eruptions. We are particularly interested in whether the large-volume eruptions of the main-phase Grande Ronde basalts were made possible by the development of shallow crustal storage.

  3. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad-scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 patients previously treated in two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients from the same institution and from another clinic that did not provide patients for the training phase. The automated plans were compared against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5 %) did the reference plans pass the criteria while the model-based plans failed. In 5.3 % of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application to clinical practice.

  4. MPCV Exercise Operational Volume Analysis

    NASA Technical Reports Server (NTRS)

    Godfrey, A.; Humphreys, B.; Funk, J.; Perusek, G.; Lewandowski, B. E.

    2017-01-01

    In order to minimize the loss of bone and muscle mass during spaceflight, the Multi-purpose Crew Vehicle (MPCV) will include an exercise device and enough free space within the cabin for astronauts to use the device effectively. The NASA Digital Astronaut Project (DAP) has been tasked with using computational modeling to aid in determining whether or not the available operational volume is sufficient for in-flight exercise. Motion capture data were acquired using a 12-camera Smart DX system (BTS Bioengineering, Brooklyn, NY) while exercisers performed 9 resistive exercises without volume restrictions in a 1g environment. Data were collected from two male subjects, one in the 99th percentile of height and the other in the 50th percentile, using between 25 and 60 motion capture markers. Motion capture data were also recorded as a third subject, also near the 50th percentile in height, performed aerobic rowing during a parabolic flight. A motion capture system and algorithms developed previously and presented at last year's HRP-IWS were utilized to collect and process the data from the parabolic flight [1]. These motions were applied to a scaled version of a biomechanical model within the biomechanical modeling software OpenSim [2], and the volume sweeps of the motions were visually assessed against an imported CAD model of the operational volume. Further numerical analysis was performed using Matlab (Mathworks, Natick, MA) and the OpenSim API. This analysis determined the location of every marker in space over the duration of the exercise motion, and the distance of each marker to the nearest surface of the volume. Containment of the exercise motions within the operational volume was determined on a per-exercise and per-subject basis. The orientation of the exerciser and the angle of the footplate were two important factors upon which containment was dependent. Regions where the exercise motion exceeds the bounds of the operational volume have been identified by determining which markers from the motion capture exceed the operational volume and by how much. A credibility assessment of this analysis was performed in accordance with NASA-STD-7009 prior to delivery to the MPCV program.
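    A simplified sketch of the containment check described above, using an axis-aligned box as a stand-in for the CAD operational volume: for each marker position, report the distance by which it exceeds the volume (zero if contained). The dimensions and marker trajectory are invented.

    ```python
    # Simplified containment check: markers against an axis-aligned box standing
    # in for the CAD operational volume. Dimensions and trajectory are invented.
    import numpy as np

    box_min = np.array([-0.5, -0.4, 0.0])   # m, illustrative volume bounds
    box_max = np.array([0.5, 0.4, 1.9])

    def box_violation(points):
        """Per-point distance outside the box (0 where contained)."""
        below = np.clip(box_min - points, 0.0, None)
        above = np.clip(points - box_max, 0.0, None)
        return np.linalg.norm(below + above, axis=-1)

    # Fake marker trajectory: (frames, markers, xyz)
    rng = np.random.default_rng(1)
    traj = rng.uniform([-0.6, -0.3, 0.1], [0.6, 0.3, 2.0], size=(500, 30, 3))

    excess = box_violation(traj)
    contained = bool(np.all(excess == 0.0))
    print(f"fully contained: {contained}, worst excursion: {excess.max()*100:.1f} cm")
    ```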

  5. Integrated modeling of second phase precipitation in cold-worked 316 stainless steels under irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamivand, Mahmood; Yang, Ying; Busby, Jeremy T.

    The current work combines the Cluster Dynamics (CD) technique and CALPHAD-based precipitation modeling to address the second phase precipitation in cold-worked (CW) 316 stainless steels (SS) under irradiation at 300–400 °C. CD provides the radiation enhanced diffusion and dislocation evolution as inputs for the precipitation model. The CALPHAD-based precipitation model treats the nucleation, growth and coarsening of precipitation processes based on classical nucleation theory and evolution equations, and simulates the composition, size and size distribution of precipitate phases. We benchmark the model against available experimental data at fast reactor conditions (9.4 × 10⁻⁷ dpa/s and 390 °C) and then use the model to predict the phase instability of CW 316 SS under light water reactor (LWR) extended life conditions (7 × 10⁻⁸ dpa/s and 275 °C). The model accurately predicts the γ' (Ni₃Si) precipitation evolution under fast reactor conditions and that the formation of this phase is dominated by radiation enhanced segregation. The model also predicts a carbide volume fraction that agrees well with available experimental data from a PWR reactor but is much higher than the volume fraction observed in fast reactors. We propose that radiation enhanced dissolution and/or carbon depletion at sinks that occurs at high flux could be the main sources of this inconsistency. The integrated model predicts ~1.2% volume fraction for carbide and ~3.0% volume fraction for γ' for typical CW 316 SS (with 0.054 wt% carbon) under LWR extended life conditions. Finally, this work provides valuable insights into the magnitudes and mechanisms of precipitation in irradiated CW 316 SS for nuclear applications.

  6. Integrated modeling of second phase precipitation in cold-worked 316 stainless steels under irradiation

    DOE PAGES

    Mamivand, Mahmood; Yang, Ying; Busby, Jeremy T.; ...

    2017-03-11

    The current work combines the Cluster Dynamics (CD) technique and CALPHAD-based precipitation modeling to address the second phase precipitation in cold-worked (CW) 316 stainless steels (SS) under irradiation at 300–400 °C. CD provides the radiation enhanced diffusion and dislocation evolution as inputs for the precipitation model. The CALPHAD-based precipitation model treats the nucleation, growth and coarsening of precipitation processes based on classical nucleation theory and evolution equations, and simulates the composition, size and size distribution of precipitate phases. We benchmark the model against available experimental data at fast reactor conditions (9.4 × 10⁻⁷ dpa/s and 390 °C) and then use the model to predict the phase instability of CW 316 SS under light water reactor (LWR) extended life conditions (7 × 10⁻⁸ dpa/s and 275 °C). The model accurately predicts the γ' (Ni₃Si) precipitation evolution under fast reactor conditions and that the formation of this phase is dominated by radiation enhanced segregation. The model also predicts a carbide volume fraction that agrees well with available experimental data from a PWR reactor but is much higher than the volume fraction observed in fast reactors. We propose that radiation enhanced dissolution and/or carbon depletion at sinks that occurs at high flux could be the main sources of this inconsistency. The integrated model predicts ~1.2% volume fraction for carbide and ~3.0% volume fraction for γ' for typical CW 316 SS (with 0.054 wt% carbon) under LWR extended life conditions. Finally, this work provides valuable insights into the magnitudes and mechanisms of precipitation in irradiated CW 316 SS for nuclear applications.

  7. Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Aarnink, Rene G.; de la Rosette, Jean J.; Chalana, Vikram; Wijkstra, Hessel; Haynor, David R.; Debruyne, Frans M. J.; Kim, Yongmin

    1998-06-01

    With the number of men seeking medical care for prostate diseases rising steadily, the need for a fast and accurate prostate boundary detection and volume estimation tool is increasingly felt by clinicians. Currently, these measurements are made manually, which results in long examination times. A possible solution is to improve efficiency by automating the boundary detection and volume estimation process with minimal involvement from human experts. In this paper, we present an algorithm based on SNAKES to detect the boundaries. Our approach is to selectively enhance the contrast along the edges using an algorithm called sticks and to integrate it with a SNAKES model. This integrated algorithm requires an initial curve for each ultrasound image to initiate the boundary detection process. We have used different schemes to generate the curves, with varying degrees of automation, and evaluated their effects on the algorithm's performance. After the boundaries are identified, the prostate volume is calculated using planimetric volumetry. We have tested our algorithm on 6 different prostate volumes and compared the performance against the volumes manually measured by 3 experts. With the increase in user input, the algorithm performance improved as expected. The results demonstrate that, given an initial contour reasonably close to the prostate boundaries, the algorithm successfully delineates the prostate boundaries in an image, and the resulting volume measurements are in close agreement with those made by the human experts.
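    The snake step can be illustrated with scikit-image's active_contour, a generic implementation rather than the authors' stick-filtered pipeline; the initial circle plays the role of the user-supplied initial curve, and the sample image is a placeholder for an ultrasound frame.

    ```python
    # Generic active-contour (snake) illustration with scikit-image; the paper's
    # pipeline additionally applies stick filtering to enhance edges first.
    import numpy as np
    from skimage import data, filters
    from skimage.segmentation import active_contour

    image = data.coins()                         # placeholder for an ultrasound frame
    smoothed = filters.gaussian(image, sigma=3)

    # Initial closed curve (rows, cols) around a target region
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([130 + 35 * np.sin(s), 200 + 35 * np.cos(s)])

    snake = active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)
    print(snake.shape)                           # (200, 2) refined boundary points
    ```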

  8. The exit-time problem for a Markov jump process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burch, N.; D'Elia, Marta; Lehoucq, Richard B.

    2014-12-15

    The purpose of our paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. Furthermore, this calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.

  9. Dual-Use Space Technology Transfer Conference and Exhibition. Volume 2

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Compiler)

    1994-01-01

    This is the second volume of papers presented at the Dual-Use Space Technology Transfer Conference and Exhibition held at the Johnson Space Center February 1-3, 1994. Possible technology transfers covered during the conference were in the areas of information access; innovative microwave and optical applications; materials and structures; marketing and barriers; intelligent systems; human factors and habitation; communications and data systems; business process and technology transfer; software engineering; biotechnology and advanced bioinstrumentation; communications signal processing and analysis; medical care; applications derived from control center data systems; human performance evaluation; technology transfer methods; mathematics, modeling, and simulation; propulsion; software analysis and decision tools; systems/processes in human support technology; networks, control centers, and distributed systems; power; rapid development; perception and vision technologies; integrated vehicle health management; automation technologies; advanced avionics; and robotics technologies.

  10. Computational modelling of the scaffold-free chondrocyte regeneration: a two-way coupling between the cell growth and local fluid flow and nutrient concentration.

    PubMed

    Hossain, Md Shakhawath; Bergstrom, D J; Chen, X B

    2015-11-01

    The in vitro chondrocyte cell culture process in a perfusion bioreactor provides enhanced nutrient supply as well as the flow-induced shear stress that may have a positive influence on the cell growth. Mathematical and computational modelling of such a culture process, by solving the coupled flow, mass transfer and cell growth equations simultaneously, can provide important insight into the biomechanical environment of a bioreactor and the related cell growth process. To do this, a two-way coupling between the local flow field and cell growth is required. Notably, most of the computational and mathematical models to date have not taken into account the influence of the cell growth on the local flow field and nutrient concentration. The present research aimed at developing a mathematical model and performing a numerical simulation using the lattice Boltzmann method to predict the chondrocyte cell growth without a scaffold on a flat plate placed inside a perfusion bioreactor. The model considers the two-way coupling between the cell growth and local flow field, and the simulation has been performed for 174 culture days. To incorporate the cell growth into the model, a control-volume-based surface growth modelling approach has been adopted. The simulation results show the variation of local fluid velocity, shear stress and concentration distribution during the culture period due to the growth of the cell phase and also illustrate that the shear stress can increase the cell volume fraction to a certain extent.

  11. Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico

    USGS Publications Warehouse

    Knutilla, R.L.; Veenhuis, J.E.

    1994-01-01

    Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.

  12. Optimization of photo-Fenton process for the treatment of prednisolone.

    PubMed

    Díez, Aida María; Ribeiro, Ana Sofia; Sanromán, Maria Angeles; Pazos, Marta

    2018-03-29

    Prednisolone is a widely prescribed synthetic glucocorticoid that is toxic to a number of non-target aquatic organisms. Its extensive consumption generates environmental concern because it is detected in wastewater samples at concentrations ranging from ng/L to μg/L, which calls for the application of suitable degradation processes. Among the available treatment options, advanced oxidation processes (AOPs) present a viable alternative. In this work, a comparison in terms of pollutant removal and energetic efficiency was carried out between different AOPs: Fenton (F), photo-Fenton (UV/F), photolysis (UV), and hydrogen peroxide/photolysis (UV/H2O2). A light-emitting diode (LED) source was selected to provide the UV radiation. The UV/F process revealed the best performance, reaching high levels of both degradation and mineralization with low energy consumption. Its optimization was conducted with iron concentration, H2O2 concentration, and working volume as the operational parameters. Using response surface methodology with a Box-Behnken design, the effects of the independent variables and their interactions on the process response were effectively evaluated. Different responses were analyzed, taking into account prednisolone removal (TOC and drug abatement) and the associated energy consumption. The obtained model showed that the UV/F process improves when treating smaller volumes and when adding higher concentrations of H2O2 and Fe2+. The validation of this model was successfully carried out, with only 5% discrepancy between the model and the experimental results. The performance of the process in a real wastewater matrix was also tested, achieving complete mineralization and detoxification after 8 h. In addition, prednisolone degradation products were identified. Finally, the low energy consumption confirmed the viability of the process.
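    A sketch of the response-surface step: fit a full quadratic model in three coded factors (standing in for Fe2+ concentration, H2O2 concentration, and working volume) to design-point responses by least squares. The design matrix is a standard three-factor Box-Behnken layout; the response values are invented.

    ```python
    # Sketch of fitting a quadratic response surface on a three-factor
    # Box-Behnken design (coded variables). Responses are invented values,
    # not the paper's measurements.
    import numpy as np
    from itertools import product

    # Standard 3-factor Box-Behnken design: 12 edge midpoints + 3 center runs
    edges = []
    for pair in [(0, 1), (0, 2), (1, 2)]:
        for a, b in product((-1.0, 1.0), repeat=2):
            pt = [0.0, 0.0, 0.0]
            pt[pair[0]], pt[pair[1]] = a, b
            edges.append(pt)
    X = np.array(edges + [[0.0, 0.0, 0.0]] * 3)

    y = np.array([62, 71, 68, 80, 55, 66, 70, 83, 58, 65, 72, 85, 88, 87, 89],
                 dtype=float)                      # invented removal (%)

    # Full quadratic model: 1, x1..x3, x1^2..x3^2, x1x2, x1x3, x2x3
    def design(X):
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

    beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    print(np.round(beta, 2))   # fitted coefficients of the response surface
    ```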

  13. Atmospheric Electricity Hazards Analytical Model Development and Application. Volume I. Lightning Environment Modeling.

    DTIC Science & Technology

    1981-08-01

    generates essentially all of the spectral content that was measured. [Garbled figure residue omitted; the recoverable axis label reads "average level of spectral amplitude" against a frequency scale in MHz.] Cited: Report No. 63-538-89, IBM Federal Systems Division, 1963; Hewitt, F.J., Radar echoes from interstroke process in lightning, Proc. Phys. Soc.

  14. Controls on Arctic sea ice from first-year and multi-year ice survival rates

    NASA Astrophysics Data System (ADS)

    Armour, K.; Bitz, C. M.; Hunke, E. C.; Thompson, L.

    2009-12-01

    The recent decrease in Arctic sea ice cover has transpired with a significant loss of multi-year (MY) ice. The transition to an Arctic that is populated by thinner first-year (FY) sea ice has important implications for future trends in area and volume. We develop a reduced model for Arctic sea ice with which we investigate how the survivability of FY and MY ice control various aspects of the sea-ice system. We demonstrate that Arctic sea-ice area and volume behave approximately as first-order autoregressive processes, which allows for a simple interpretation of September sea-ice in which its mean state, variability, and sensitivity to climate forcing can be described naturally in terms of the average survival rates of FY and MY ice. This model, used in concert with a sea-ice simulation that traces FY and MY ice areas to estimate the survival rates, reveals that small trends in the ice survival rates explain the decline in total Arctic ice area, and the relatively larger loss of MY ice area, over the period 1979-2006. Additionally, our model allows for a calculation of the persistence time scales of September area and volume anomalies. A relatively short memory time scale for ice area (~ 1 year) implies that Arctic ice area is nearly in equilibrium with long-term climate forcing at all times, and therefore observed trends in area are a clear indication of a changing climate. A longer memory time scale for ice volume (~ 5 years) suggests that volume can be out of equilibrium with climate forcing for long periods of time, and therefore trends in ice volume are difficult to distinguish from its natural variability. With our reduced model, we demonstrate the connection between memory time scale and sensitivity to climate forcing, and discuss the implications that a changing memory time scale has on the trajectory of ice area and volume in a warming climate. Our findings indicate that it is unlikely that a “tipping point” in September ice area and volume will be reached as the climate is further warmed. Finally, we suggest novel model validation techniques based upon comparing the characteristics of FY and MY ice within models to observations. We propose that keeping an account of FY and MY ice area within sea ice models offers a powerful new way to evaluate model projections of sea ice in a greenhouse warming climate.
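    The first-order autoregressive description can be made concrete as follows: anomalies evolve as a(t+1) = r·a(t) + noise, with memory timescale τ = -1/ln(r). The r values below are chosen only to echo the ~1 yr (area) and ~5 yr (volume) timescales quoted above; the amplitudes are arbitrary.

    ```python
    # AR(1) sketch of September anomalies: a(t+1) = r*a(t) + noise, with memory
    # timescale tau = -1/ln(r). r values echo the ~1 yr (area) and ~5 yr
    # (volume) timescales; amplitudes are arbitrary.
    import numpy as np

    def ar1(r, n_years=200, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        a = np.zeros(n_years)
        for t in range(1, n_years):
            a[t] = r * a[t - 1] + rng.normal(0.0, sigma)
        return a

    for name, r in [("area", np.exp(-1.0)), ("volume", np.exp(-0.2))]:
        tau = -1.0 / np.log(r)
        series = ar1(r)
        # lag-1 autocorrelation of the simulated series recovers r
        r_hat = np.corrcoef(series[:-1], series[1:])[0, 1]
        print(f"{name}: tau = {tau:.1f} yr, prescribed r = {r:.2f}, "
              f"estimated r = {r_hat:.2f}")
    ```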

  15. Mathematical modeling of the process of filling a mold during injection molding of ceramic products

    NASA Astrophysics Data System (ADS)

    Kulkov, S. N.; Korobenkov, M. V.; Bragin, N. A.

    2015-10-01

    Prediction of mold filling during injection molding of ceramic products, carried out here using the software package Fluent, is of great importance because the strength of the final product is directly related to the presence of voids in the molding; it makes possible the early detection of inaccuracies in the mold prior to manufacturing. The calculations were formulated as a mathematical model of the hydrodynamic turbulent process of filling a predetermined volume with a viscous liquid. The model used to simulate mold filling evaluated the influence of the density and viscosity of the feedstock, and of the injection pressure, on the filling process in order to predict the formation of voids caused by defects in the mold geometry.

  16. Application of evolutionary games to modeling carcinogenesis.

    PubMed

    Swierniak, Andrzej; Krzeslak, Michal

    2013-06-01

    We review a quite large volume of literature concerning mathematical modelling of processes related to carcinogenesis and the growth of cancer cell populations based on the theory of evolutionary games. This review, although partly idiosyncratic, covers such major areas of cancer-related phenomena as production of cytotoxins, avoidance of apoptosis, production of growth factors, motility and invasion, and intra- and extracellular signaling. We discuss the results of other authors and append to them some additional results of our own simulations dealing with the possible dynamics and/or spatial distribution of the processes discussed.
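    A minimal replicator-dynamics sketch of the kind of evolutionary game reviewed above, for two phenotypes (e.g., growth-factor producers vs. non-producers); the payoff matrix is invented for illustration.

    ```python
    # Minimal replicator-dynamics sketch for a two-phenotype cancer game
    # (e.g., growth-factor producers vs. non-producers). Payoffs are invented.
    import numpy as np

    A = np.array([[1.0, 0.4],    # payoff of producer vs (producer, non-producer)
                  [1.3, 0.2]])   # payoff of non-producer vs (producer, non-producer)

    x = np.array([0.5, 0.5])     # initial phenotype frequencies
    dt = 0.01
    for _ in range(20000):
        f = A @ x                      # fitness of each phenotype
        phi = x @ f                    # mean population fitness
        x = x + dt * x * (f - phi)     # replicator equation
        x = x / x.sum()                # guard against numerical drift

    # For this payoff matrix the stable interior equilibrium is (0.4, 0.6).
    print(f"equilibrium: producers={x[0]:.2f}, non-producers={x[1]:.2f}")
    ```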

  17. Carbon nanotube based respiratory gated micro-CT imaging of a murine model of lung tumors with optical imaging correlation

    NASA Astrophysics Data System (ADS)

    Burk, Laurel M.; Lee, Yueh Z.; Heathcote, Samuel; Wang, Ko-han; Kim, William Y.; Lu, Jianping; Zhou, Otto

    2011-03-01

    Current optical imaging techniques can successfully measure tumor load in murine models of lung carcinoma but lack structural detail. We demonstrate that respiratory gated micro-CT imaging of such models gives information about structure and correlates with tumor load measurements by optical methods. Four mice with multifocal, Kras-induced tumors expressing firefly luciferase were imaged against four controls using both optical imaging and respiratory gated micro-CT. CT images of anesthetized animals were acquired with a custom CNT-based system using 30 ms x-ray pulses during peak inspiration; respiration motion was tracked with a pressure sensor beneath each animal's abdomen. Optical imaging based on the Luc+ signal correlating with tumor load was performed on a Xenogen IVIS Kinetix. Micro-CT images were post-processed using Osirix, measuring lung volume with region growing. Diameters of the largest three tumors were measured. Relationships between tumor size, lung volumes, and optical signal were compared. CT images and optical signals were obtained for all animals at two time points. In all lobes of the Kras+ mice in all images, tumors were visible; the smallest to be readily identified measured approximately 300 microns diameter. CT-derived tumor volumes and optical signals related linearly, with r=0.94 for all animals. When derived for only tumor bearing animals, r=0.3. The trend of each individual animal's optical signal tracked correctly based on the CT volumes. Interestingly, lung volumes also correlated positively with optical imaging data and tumor volume burden, suggesting active remodeling.

  18. Thermoelectric Generators for Automotive Waste Heat Recovery Systems Part I: Numerical Modeling and Baseline Model Analysis

    NASA Astrophysics Data System (ADS)

    Kumar, Sumeet; Heister, Stephen D.; Xu, Xianfan; Salvador, James R.; Meisner, Gregory P.

    2013-04-01

    A numerical model has been developed to simulate coupled thermal and electrical energy transfer processes in a thermoelectric generator (TEG) designed for automotive waste heat recovery systems. This model is capable of computing the overall heat transferred, the electrical power output, and the associated pressure drop for given inlet conditions of the exhaust gas and the available TEG volume. Multiple-filled skutterudites and conventional bismuth telluride are considered for thermoelectric modules (TEMs) for conversion of waste heat from exhaust into usable electrical power. Heat transfer between the hot exhaust gas and the hot side of the TEMs is enhanced with the use of a plate-fin heat exchanger integrated within the TEG and using liquid coolant on the cold side. The TEG is discretized along the exhaust flow direction using a finite-volume method. Each control volume is modeled as a thermal resistance network which consists of integrated submodels including a heat exchanger and a thermoelectric device. The pressure drop along the TEG is calculated using standard pressure loss correlations and viscous drag models. The model is validated to preserve global energy balances and is applied to analyze a prototype TEG with data provided by General Motors. Detailed results are provided for local and global heat transfer and electric power generation. In the companion paper, the model is then applied to consider various TEG topologies using skutterudite and bismuth telluride TEMs.
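    A stripped-down sketch of one control volume in such a network: series thermal resistances set the heat flow from gas to coolant, and a matched-load Seebeck estimate gives the electrical output. All property values are invented, and the real model iterates the coupled thermal-electric balance along the exhaust flow direction.

    ```python
    # Stripped-down sketch of one control volume of a TEG thermal-resistance
    # network. Property values are invented; the real model iterates the
    # coupled thermal-electric balance along the exhaust flow.
    T_gas, T_cool = 550.0, 90.0    # degC, local exhaust and coolant temperatures
    R_hex = 0.08                   # K/W, fin/heat-exchanger resistance (hot side)
    R_tem = 0.50                   # K/W, thermoelectric module resistance
    R_cold = 0.05                  # K/W, cold-side resistance

    q = (T_gas - T_cool) / (R_hex + R_tem + R_cold)   # heat through the module, W
    dT_tem = q * R_tem                                # temperature drop across TEM, K

    alpha = 0.05   # V/K, effective module Seebeck coefficient (illustrative)
    R_el = 2.0     # ohm, internal electrical resistance

    V_oc = alpha * dT_tem          # open-circuit Seebeck voltage
    P_el = V_oc**2 / (4.0 * R_el)  # matched-load electrical output, W
    print(f"q = {q:.0f} W, dT_TEM = {dT_tem:.0f} K, P = {P_el:.1f} W")
    ```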

  19. Rheological changes of polyamide 12 under oscillatory shear

    NASA Astrophysics Data System (ADS)

    Mielicki, C.; Gronhoff, B.; Wortberg, J.

    2014-05-01

    Changes in material properties as well as process deviations prevent laser sintering (LS) technology from manufacturing quality-assured parts in series production. In this context, the viscosity of polyamide 12 (PA12) is assumed to have the most significant influence, as it determines the sintering velocity, the resistance towards melt formation, and the bonding strength of sintered layers. Moreover, the viscosity is directly related to the structure of the molten polymer. In particular, it has recently been reported that LS process conditions lead to structural changes of PA12 that significantly affect viscosity and the coalescence of adjacent polymer particles, i.e. melt formation. The structural change of PA12 is understood as post-condensation. Its influence on viscosity was described by a time- and temperature-dependent rheological model, in which time dependence was captured by a novel structural-change shift factor derived from melt volume rate data. In combination with process data recorded using online thermal imaging, the model is suitable for controlling the viscosity (the processability of the material) as a result of material and process properties. However, as soon as laser energy is deposited into the powder bed, PA12 undergoes a phase transition from the solid to the molten state. Above the melting point, structural change is expected to occur faster due to the higher kinetic energy and free volume of the molten polymer. Oscillatory shear results were used to study the influence of aging time and to validate the novel structural-change shift factor and its model parameters, which were calibrated based on LS processing conditions.

  20. Thermodynamic Models for Aqueous Alteration Coupled with Volume and Pressure Changes in Asteroids

    NASA Technical Reports Server (NTRS)

    Mironenko, M. V.; Zolotov, M. Y.

    2005-01-01

    All major classes of chondrites show signs of alteration on their parent bodies (asteroids). The prevalence of oxidation and hydration in alteration pathways implies that water was the major reactant. Sublimation and melting of water ice, generation of gases, formation of aqueous solutions, alteration of primary minerals and glasses and formation of secondary solids in interior parts of asteroids was likely to be driven by heat from the radioactive decay of short-lived radionuclides. Progress of alteration reactions should have affected masses and volumes of solids, and aqueous and gas phases. In turn, pressure evolution should have been controlled by changes in volumes and temperatures, escape processes, and production/ consumption of gases.

  1. An adaptive maneuvering logic computer program for the simulation of one-on-one air-to-air combat. Volume 1: General description

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Fogel, L. J.; Phelps, J. P.

    1975-01-01

    A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers together with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat, both controlled by the same logic, are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.

  2. A quantification strategy for missing bone mass in case of osteolytic bone lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fränzle, Andrea, E-mail: a.fraenzle@dkfz.de; Giske, Kristina; Bretschi, Maren

    Purpose: Most of the patients who died of breast cancer have developed bone metastases. To understand the pathogenesis of bone metastases and to analyze treatment response of different bone remodeling therapies, preclinical animal models are examined. In breast cancer, bone metastases are often bone destructive. To assess treatment response of bone remodeling therapies, the volumes of these lesions have to be determined during the therapy process. The manual delineation of missing structures, especially if large parts are missing, is very time-consuming and not reproducible. Reproducibility is highly important to have comparable results during the therapy process. Therefore, a computerized approach is needed. Also for preclinical research, a reproducible measurement of the lesions is essential. Here, the authors present an automated segmentation method for the measurement of missing bone mass in a preclinical rat model with bone metastases in the hind leg bones, based on 3D CT scans. Methods: The affected bone structure is compared to a healthy model. Since in this preclinical rat trial the metastasis only occurs on the right hind legs, which is assured by using vessel clips, the authors use the left body side as a healthy model. The left femur is segmented with a statistical shape model which is initialised using the automatically segmented medullary cavity. The left tibia and fibula are segmented using volume growing starting at the tibia medullary cavity and stopping at the femur boundary. Masked images of both segmentations are mirrored along the median plane and transferred manually to the position of the affected bone by rigid registration. Affected bone and healthy model are compared based on their gray values. If the gray value of a voxel indicates bone mass in the healthy model and no bone in the affected bone, this voxel is considered to be osteolytic. Results: The lesion segmentations complete the missing bone structures in a reasonable way. The mean ratio v_r/v_m of the reconstructed bone volume v_r to the healthy model bone volume v_m is 1.07, which indicates a good reconstruction of the modified bone. Conclusions: The qualitative and quantitative comparison of manual and semi-automated segmentation results has shown that comparing a modified bone structure with a healthy model can be used to identify and measure missing bone mass in a reproducible way.
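    Once the mirrored healthy model is registered to the affected side, the gray-value comparison reduces to a voxel-wise boolean operation; a schematic numpy version, with placeholder arrays and threshold standing in for the registered CT volumes:

    ```python
    # Schematic voxel-wise comparison: a voxel is counted as osteolytic when the
    # registered healthy model shows bone there but the affected bone does not.
    # Arrays and threshold are placeholders for the registered CT volumes.
    import numpy as np

    rng = np.random.default_rng(2)
    healthy_model = rng.integers(0, 1500, size=(64, 64, 64))   # mirrored, registered
    affected_bone = rng.integers(0, 1500, size=(64, 64, 64))
    bone_threshold = 800                                        # gray-value cutoff

    osteolytic = (healthy_model >= bone_threshold) & (affected_bone < bone_threshold)

    voxel_mm = 0.05                                  # isotropic voxel size, mm
    missing_volume = osteolytic.sum() * voxel_mm**3  # missing bone volume, mm^3
    print(f"osteolytic volume: {missing_volume:.2f} mm^3")
    ```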

  3. STARS Conceptual Framework for Reuse Process (CFRP). Volume 1. Definition. Version 3.0

    DTIC Science & Technology

    1993-10-25

    Prepared by The Boeing Company Defense & Space Group (Seattle, WA), IBM Federal Systems Company (Gaithersburg, MD), and Unisys Corporation for the USAF at Hanscom AFB, MA. Contents include Section 3.2.1.1, Domain Analysis and Modeling Process Category, and Section 3.2.1.2, Domain Architecture Development Process.

  4. BDM-KAT; Report of Research Results

    DTIC Science & Technology

    1990-03-31

    [Figure 4: Computer Network for the Intelligent Control of the HIP Process.] BDM-KAT was prototyped and used in preliminary knowledge acquisition for an intelligent process controller for Hot Isostatic Pressing (HIP). Both the volume of information collected and structured and the value of that knowledge for the developing controller attest to the value of the concepts implemented in BDM-KAT.

  5. Volume Diffusion Growth Kinetics and Step Geometry in Crystal Growth

    NASA Technical Reports Server (NTRS)

    Mazuruk, Konstantin; Ramachandran, Narayanan

    1998-01-01

    The role of step geometry in the two-dimensional stationary volume diffusion process used in crystal growth kinetics models is investigated. Three different interface shapes are used in this comparative study: a) a planar interface, b) an interface with a train of equidistant hemispherical bumps, and c) a train of right-angled steps. The ratio of the supersaturation to the diffusive flux at the step position is used as a control parameter. The value of this parameter can vary by as much as 50% for the different geometries. An approximate analytical formula is derived for the right-angled step geometry. In addition to the kinetic models, this formula can be utilized in macrostep growth models. Finally, numerical modeling of the diffusive and convective transport for equidistant steps is conducted. In particular, the role of fluid flow resulting from the advancement of steps and its contribution to the transport of species to the steps is investigated.

  6. Quantitative volumetric analysis of a retinoic acid induced hypoplastic model of chick thymus, using Image-J.

    PubMed

    Haque, Ayesha; Khan, Muhammad Yunus

    2017-09-01

    To assess the total volume change in a retinoic acid-induced, hypoplastic model of a chick thymus using Image-J. This experimental study was carried out at the anatomy department of College of Physicians and Surgeons, Islamabad, Pakistan, from February 2009 to February 2010, and comprised fertilised chicken eggs. The eggs were divided into experimental group A and control group C. Group A was injected with 0.3µg of retinoic acid via yolk sac to induce a defective model of a thymus with hypoplasia. The chicks were sacrificed on embryonic day 15 and at hatching. The thymus of each animal was processed, serially sectioned and stained. The total area of each section of thymus was calculated using Image-J. The section areas were summed and multiplied by the section thickness to obtain the volume. Of the 120 eggs, there were 60 (50%) in each group. Image analysis revealed a highly significant decrease in the volume of the chick thymus in experimental group A compared with its matched control at the time of hatching (p=0.001). Moreover, volumetric depletion progressed with time, being substantially pronounced at hatching compared to the embryonic stage. The volume changes were significant and were effectively quantified using Image-J.
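    The volume computation itself is simple enough to state directly. A minimal sketch of the area-summation step, assuming the per-section areas have already been exported from Image-J (all numbers below are illustrative):

    ```python
    import numpy as np

    def volume_from_sections(section_areas, section_thickness):
        """Total volume = (sum of serial-section areas) x thickness."""
        return np.sum(section_areas) * section_thickness

    # Illustrative areas (um^2) for four serial sections cut at 5 um:
    areas = np.array([1.2e6, 1.5e6, 1.4e6, 0.9e6])
    print(volume_from_sections(areas, section_thickness=5.0))  # um^3
    ```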

  7. Cirrus Simulations of CRYSTAL-FACE 23 July 2002 Case

    NASA Technical Reports Server (NTRS)

    Starr, David; Lin, Ruei-Fong; Demoz, Belay; Lare, Andrew

    2004-01-01

    A key objective of the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE) is to understand relationships between the properties of tropical convective cloud systems and the properties and lifecycle of the extended cirrus anvils they produce. We report here on a case study of 23 July 2002, in which a sequence of convective storms over central Florida produced an extensive anvil outflow. Our approach is to use a suitably-initialized cloud-system simulation with MM5 (Starr et al., companion paper in this volume) to define initial conditions and time-dependent forcing for a simulation of anvil evolution using a two-dimensional fine-resolution (100 m) cirrus cloud model that explicitly accounts for details of cirrus microphysical development (bin or spectral model) and fully interactive radiative processes. The cirrus model follows Lin (1997). The microphysical components are described in Lin et al. (2004) - see Lin et al. (this volume). Meteorological conditions and observations for the 23 July case are described in Starr et al. (this volume). The goals of the present study are to evaluate how well we can simulate a cirrus anvil lifecycle, to evaluate the importance of various physical processes that operate within the anvil, and to evaluate the importance of environmental conditions in regulating anvil lifecycle. CRYSTAL-FACE produced a number of excellent case studies of anvil systems that will allow environmental factors, such as static stability or wind shear in the upper troposphere, to be examined. In the present study, we strive to assess the importance to anvil lifecycle and characteristics of propagating gravity waves, likely produced by the deep convection itself, and of radiative processes.

  8. When Spreadsheets Become Software - Quality Control Challenges and Approaches - 13360

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fountain, Stefanie A.; Chen, Emmie G.; Beech, John F.

    2013-07-01

    As part of a preliminary waste acceptance criteria (PWAC) development, several commercial models were employed, including the Hydrologic Evaluation of Landfill Performance model (HELP) [1], the Disposal Unit Source Term - Multiple Species model (DUSTMS) [2], and the Analytical Transient One, Two, and Three-Dimensional model (AT123D) [3]. The results of these models were post-processed in MS Excel spreadsheets to convert the model results to alternate units, compare the groundwater concentrations to the groundwater concentration thresholds, and then to adjust the waste contaminant masses (based on average concentration over the waste volume) as needed in an attempt to achieve groundwater concentrations at the limiting point of assessment that would meet the compliance concentrations while maximizing the potential use of the landfill (i.e., maximizing the volume of projected waste being generated that could be placed in the landfill). During the course of the PWAC calculation development, one of the Microsoft (MS) Excel spreadsheets used to post-process the results of the commercial model packages grew to include more than 575,000 formulas across 18 worksheets. This spreadsheet was used to assess six base scenarios as well as nine uncertainty/sensitivity scenarios. The complexity of the spreadsheet resulted in the need for a rigorous quality control (QC) procedure to verify data entry and confirm the accuracy of formulas.

  9. Finite-element modelling of physics-based hillslope hydrology, Keith Beven, and beyond

    USGS Publications Warehouse

    Loague, Keith; Ebel, Brian A.

    2016-01-01

    Keith Beven is a voice of reason on the intelligent use of models and the subsequent acknowledgement/assessment of the uncertainties associated with environmental simulation. With several books and hundreds of papers, Keith’s work is widespread, well known, and highly referenced. Four of Keith’s most notable contributions are the iconic TOPMODEL (Beven and Kirkby, 1979), classic papers on macropores and preferential flow (Beven and Germann, 1982, 2013), two editions of the rainfall-runoff modelling bible (Beven, 2000a, 2012), and the selection/commentary for the first volume from the Benchmark Papers in Hydrology series (Beven, 2006b). Remarkably, the thirty-one papers in his benchmark volume, entitled Streamflow Generation Processes, are not tales of modelling wizardry but describe measurements designed to better understand the dynamics of near-surface systems (quintessential Keith). The impetus for this commentary is Keith’s PhD research (Beven, 1975), where he developed a new finite-element model and conducted concept-development simulations based upon the processes identified by, for example, Richards (1931), Horton (1933), Hubbert (1940), Hewlett and Hibbert (1963), and Dunne and Black (1970a,b). Readers not familiar with the different mechanisms of streamflow generation are referred to Dunne (1978).

  10. Invariance in the recurrence of large returns and the validation of models of price dynamics

    NASA Astrophysics Data System (ADS)

    Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey

    2013-08-01

    Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
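    As a rough illustration of the recurrence statistic, the sketch below extracts waiting times between threshold-exceeding returns; the paper's nonparametric excursion definition is more refined, so treat this as a simplified stand-in.

    ```python
    import numpy as np

    def waiting_times(returns, quantile=0.95):
        """Gaps (in return intervals) between successive large returns,
        defined here simply as |r| above an empirical quantile."""
        threshold = np.quantile(np.abs(returns), quantile)
        hits = np.flatnonzero(np.abs(returns) >= threshold)
        return np.diff(hits)

    rng = np.random.default_rng(0)
    toy = rng.standard_t(df=3, size=10_000) * 0.01  # heavy-tailed toy returns
    w = waiting_times(toy)
    # Comparing the empirical distribution of `w` across stocks, years,
    # and return intervals is the invariance check discussed above.
    ```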

  11. Statistical Model of Dynamic Markers of the Alzheimer's Pathological Cascade.

    PubMed

    Balsis, Steve; Geraci, Lisa; Benge, Jared; Lowe, Deborah A; Choudhury, Tabina K; Tirso, Robert; Doody, Rachelle S

    2018-05-05

    Alzheimer's disease (AD) is a progressive disease reflected in markers across assessment modalities, including neuroimaging, cognitive testing, and evaluation of adaptive function. Identifying a single continuum of decline across assessment modalities in a single sample is statistically challenging because of the multivariate nature of the data. To address this challenge, we implemented advanced statistical analyses designed specifically to model complex data across a single continuum. We analyzed data from the Alzheimer's Disease Neuroimaging Initiative (ADNI; N = 1,056), focusing on indicators from the assessments of magnetic resonance imaging (MRI) volume, fluorodeoxyglucose positron emission tomography (FDG-PET) metabolic activity, cognitive performance, and adaptive function. Item response theory was used to identify the continuum of decline. Then, through a process of statistical scaling, indicators across all modalities were linked to that continuum and analyzed. Findings revealed that measures of MRI volume, FDG-PET metabolic activity, and adaptive function added measurement precision beyond that provided by cognitive measures, particularly in the relatively mild range of disease severity. More specifically, MRI volume and FDG-PET metabolic activity become compromised in the very mild range of severity, followed by cognitive performance and finally adaptive function. Our statistically derived models of the AD pathological cascade are consistent with existing theoretical models.

  12. A numerical model of two-phase flow at the micro-scale using the volume-of-fluid method

    NASA Astrophysics Data System (ADS)

    Shams, Mosayeb; Raeini, Ali Q.; Blunt, Martin J.; Bijeljic, Branko

    2018-03-01

    This study presents a simple and robust numerical scheme to model two-phase flow in porous media where capillary forces dominate over viscous effects. The volume-of-fluid method is employed to capture the fluid-fluid interface whose dynamics is explicitly described based on a finite volume discretization of the Navier-Stokes equations. Interfacial forces are calculated directly on reconstructed interface elements such that the total curvature is preserved. The computed interfacial forces are explicitly added to the Navier-Stokes equations using a sharp formulation which effectively eliminates spurious currents. The stability and accuracy of the implemented scheme is validated on several two- and three-dimensional test cases, which indicate the capability of the method to model two-phase flow processes at the micro-scale. In particular we show how the co-current flow of two viscous fluids leads to greatly enhanced flow conductance for the wetting phase in corners of the pore space, compared to a case where the non-wetting phase is an inviscid gas.

  13. Process-based selection of copula types for flood peak-volume relationships in Northwest Austria: a case study

    NASA Astrophysics Data System (ADS)

    Kohnová, Silvia; Gaál, Ladislav; Bacigál, Tomáš; Szolgay, Ján; Hlavčová, Kamila; Valent, Peter; Parajka, Juraj; Blöschl, Günter

    2016-12-01

    The case study aims at selecting optimal bivariate copula models of the relationships between flood peaks and flood volumes from a regional perspective with a particular focus on flood generation processes. Besides the traditional approach that deals with the annual maxima of flood events, the current analysis also includes all independent flood events. The target region is located in the northwest of Austria; it consists of 69 small and mid-sized catchments. On the basis of the hourly runoff data from the period 1976-2007, independent flood events were identified and assigned to one of the following three types of flood categories: synoptic floods, flash floods and snowmelt floods. Flood events in the given catchment are considered independent when they originate from different synoptic situations. Nine commonly-used copula types were fitted to the flood peak-flood volume pairs at each site. In this step, two databases were used: i) a process-based selection of all the independent flood events (three data samples at each catchment) and ii) the annual maxima of the flood peaks and the respective flood volumes regardless of the flood processes (one data sample per catchment). The goodness-of-fit of the nine copula types was examined on a regional basis throughout all the catchments. It was concluded that (1) the copula models for the flood processes are discernible locally; (2) the Clayton copula provides an unacceptable performance for all three processes as well as in the case of the annual maxima; (3) the rejection of the other copula types depends on the flood type and the sample size; (4) there are differences in the copulas with the best fits: for synoptic and flash floods, the best performance is associated with the extreme value copulas; for snowmelt floods, the Frank copula fits the best; while in the case of the annual maxima, no firm conclusion could be made due to the number of copulas with similarly acceptable overall performances. The general conclusion from this case study is that treating flood processes separately is beneficial; however, the usually available sample size in such real life studies is not sufficient to give generally valid recommendations for engineering design tasks.
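    For readers who want to experiment with one of the copula families mentioned, here is a minimal moment-style fit of the Gumbel (extreme-value) copula via Kendall's tau inversion; the study itself fits nine families with formal goodness-of-fit tests, so this is only an entry point, not the paper's procedure.

    ```python
    import numpy as np
    from scipy import stats

    def fit_gumbel_theta(peaks, volumes):
        """Gumbel copula parameter from Kendall's tau: theta = 1/(1 - tau)."""
        tau, _ = stats.kendalltau(peaks, volumes)
        if tau <= 0:
            raise ValueError("Gumbel requires positive dependence")
        return 1.0 / (1.0 - tau)

    def gumbel_cdf(u, v, theta):
        """C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
        s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
        return np.exp(-s ** (1.0 / theta))
    ```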

  14. Upgrade to Ion Exchange Modeling for Removal of Technetium from Hanford Waste Using SuperLig® 639 Resin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamm, L.; Smith, F.; Aleman, S.

    2013-05-16

    This report documents the development and application of computer models to describe the sorption of pertechnetate [TcO₄⁻], and its surrogate perrhenate [ReO₄⁻], on SuperLig® 639 resin. Two models have been developed: 1) A thermodynamic isotherm model, based on experimental data, that predicts [TcO₄⁻] and [ReO₄⁻] sorption as a function of solution composition and temperature and 2) A column model that uses the isotherm calculated by the first model to simulate the performance of a full-scale sorption process. The isotherm model provides a synthesis of experimental data collected from many different sources to give a best estimate prediction of the behavior of the pertechnetate-SuperLig® 639 system and an estimate of the uncertainty in this prediction. The column model provides a prediction of the expected performance of the plant process by determining the volume of waste solution that can be processed based on process design parameters such as column size, flow rate and resin physical properties.

  15. VARTM Model Development and Verification

    NASA Technical Reports Server (NTRS)

    Cano, Roberto J. (Technical Monitor); Dowling, Norman E.

    2004-01-01

    In this investigation, a comprehensive Vacuum Assisted Resin Transfer Molding (VARTM) process simulation model was developed and verified. The model incorporates resin flow through the preform, compaction and relaxation of the preform, and viscosity and cure kinetics of the resin. The computer model can be used to analyze the resin flow details, track the thickness change of the preform, predict the total infiltration time and final fiber volume fraction of the parts, and determine whether the resin could completely infiltrate and uniformly wet out the preform.
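    The infiltration-time prediction can be illustrated with the classical 1-D Darcy fill-time estimate that such simulations generalize; the constant-pressure formula and every number below are textbook assumptions, not values from the VARTM model itself.

    ```python
    def fill_time(length, permeability, viscosity, delta_p, fiber_fraction):
        """1-D Darcy infiltration under constant driving pressure:
        t = phi * mu * L^2 / (2 * K * dP), with porosity phi = 1 - Vf."""
        porosity = 1.0 - fiber_fraction
        return porosity * viscosity * length**2 / (2.0 * permeability * delta_p)

    # Illustrative values: 0.5 m part, K = 1e-10 m^2, mu = 0.2 Pa.s,
    # near-full vacuum (~90 kPa), fiber volume fraction 0.55:
    t = fill_time(0.5, 1e-10, 0.2, 9e4, 0.55)
    print(f"estimated fill time: {t / 60:.1f} min")   # ~20.8 min
    ```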

  16. TERSSE: Definition of the Total Earth Resources System for the Shuttle Era. Volume 7: User Models: A System Assessment

    NASA Technical Reports Server (NTRS)

    1974-01-01

    User models, defined as any explicit process or procedure used to transform information extracted from remotely sensed data into a form useful as a resource management information input, are discussed. The role of the user models as information, technological, and operations interfaces between the TERSSE and the resource managers is emphasized. It is recommended that guidelines and management strategies be developed for a systems approach to user model development.

  17. Linking Volcano Infrasound Observations to Conduit Processes for Vulcanian Eruptions

    NASA Astrophysics Data System (ADS)

    Watson, L. M.; Dunham, E. M.; Almquist, M.; Mattsson, K.; Ampong, K.

    2016-12-01

    Volcano infrasound observations have been used to infer a range of eruption parameters, such as volume flux and exit velocity, with the majority of work focused on subaerial processes. Here, we propose using infrasound observations to investigate the subsurface processes of the volcanic system. We develop a one-dimensional model of the volcanic system, coupling an unsteady conduit model to a description of a volcanic jet with sound waves generated by the expansion of the jet. The conduit model describes isothermal two-phase flow with no relative motion between the phases. We are currently working on including crystals and adding conservation of energy to the governing equations. The model captures the descent of the fragmentation front into the conduit and approaches a steady state solution with choked flow at the vent. The descending fragmentation front influences the time history of mass discharge from the vent, which is linked to the infrasound signal through the volcanic jet model. The jet model is coupled to the conduit by conservation of mass, momentum, and energy. We compare simulation results for models of the volcanic jet ranging in complexity from assuming conservation of volume, as has been done in some previous infrasound studies, to solving the Euler equations for the surrounding compressible atmosphere and accounting for entrainment. Our model is designed for short-lived, impulsive Vulcanian eruptions, such as those seen at Sakurajima Volcano, with activity triggered by a sudden drop in pressure at the top of the conduit. The intention is to compare the simulated signals to observations and to devise an inversion procedure for conduit properties.

  18. Continuously graded extruded polymer composites for energetic applications fabricated using twin-screw extrusion processing technology

    NASA Astrophysics Data System (ADS)

    Gallant, Frederick M.

    A novel method of fabricating functionally graded extruded composite materials is proposed for propellant applications using the technology of continuous processing with a Twin-Screw Extruder. The method is applied to the manufacturing of grains for solid rocket motors in an end-burning configuration with an axial gradient in ammonium perchlorate volume fraction and relative coarse/fine particle size distributions. The fabrication of functionally graded extruded polymer composites with either inert or energetic ingredients has yet to be investigated. The lack of knowledge concerning the processing of these novel materials has necessitated that a number of research issues be addressed. Of primary concern is characterizing and modeling the relationship between the extruder screw geometry, transient processing conditions, and the gradient architecture that evolves in the extruder. Recent interpretations of the Residence Time Distributions (RTDs) and Residence Volume Distributions (RVDs) for polymer composites in the TSE are used to develop new process models for predicting gradient architectures in the direction of extrusion. An approach is developed for characterizing the sections of the extrudate using optical, mechanical, and compositional analysis to determine the gradient architectures. The effects of processing on the burning rate properties of extruded energetic polymer composites are characterized for homogeneous formulations over a range of compositions to determine realistic gradient architectures for solid rocket motor applications. The new process models and burning rate properties that have been characterized in this research effort will be the basis for an inverse design procedure that is capable of determining gradient architectures for grains in solid rocket motors that possess tailored burning rate distributions that conform to user-defined performance specifications.
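    The RTD-based prediction of gradient architecture can be sketched as a convolution of the feed program with a residence time distribution. The tanks-in-series RTD below is a generic surrogate standing in for the measured RTD/RVD interpretations used in the work, and all parameters are illustrative.

    ```python
    import math
    import numpy as np

    def tanks_in_series_rtd(t, mean_rt, n=5):
        """E(t) for n ideal tanks in series (a common RTD surrogate)."""
        tau = mean_rt / n
        return t**(n - 1) * np.exp(-t / tau) / (math.factorial(n - 1) * tau**n)

    def extrudate_composition(t, feed_fraction, mean_rt):
        """Composition at the die = feed program convolved with the RTD."""
        dt = t[1] - t[0]
        e = tanks_in_series_rtd(t, mean_rt)
        return np.convolve(feed_fraction, e, mode="full")[:len(t)] * dt

    t = np.linspace(0.01, 600.0, 600)              # seconds
    feed = np.clip((t - 120.0) / 240.0, 0.0, 1.0)  # ramp AP fraction 0 -> 1
    profile = extrudate_composition(t, feed, mean_rt=90.0)
    # `profile` maps to an axial gradient once scaled by the extrusion rate.
    ```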

  19. Lung volume quantified by MRI reflects extracellular-matrix deposition and altered pulmonary function in bleomycin models of fibrosis: effects of SOM230.

    PubMed

    Egger, Christine; Gérard, Christelle; Vidotto, Nella; Accart, Nathalie; Cannet, Catherine; Dunbar, Andrew; Tigani, Bruno; Piaia, Alessandro; Jarai, Gabor; Jarman, Elizabeth; Schmid, Herbert A; Beckmann, Nicolau

    2014-06-15

    Idiopathic pulmonary fibrosis is a progressive and lethal disease, characterized by loss of lung elasticity and alveolar surface area, secondary to alveolar epithelial cell injury, reactive inflammation, proliferation of fibroblasts, and deposition of extracellular matrix. The effects of oropharyngeal aspiration of bleomycin in Sprague-Dawley rats and C57BL/6 mice, as well as of intratracheal administration of ovalbumin to actively sensitized Brown Norway rats on total lung volume as assessed noninvasively by magnetic resonance imaging (MRI) were investigated here. Lung injury and volume were quantified by using nongated or respiratory-gated MRI acquisitions [ultrashort echo time (UTE) or gradient-echo techniques]. Lung function of bleomycin-challenged rats was examined additionally using a flexiVent system. Postmortem analyses included histology of collagen and hydroxyproline assays. Bleomycin induced an increase of MRI-assessed total lung volume, lung dry and wet weights, and hydroxyproline content as well as collagen amount. In bleomycin-treated rats, gated MRI showed an increased volume of the lung in the inspiratory and expiratory phases of the respiratory cycle and a temporary decrease of tidal volume. Decreased dynamic lung compliance was found in bleomycin-challenged rats. Bleomycin-induced increase of MRI-detected lung volume was consistent with tissue deposition during fibrotic processes resulting in decreased lung elasticity, whereas influences by edema or emphysema could be excluded. In ovalbumin-challenged rats, total lung volume quantified by MRI remained unchanged. The somatostatin analog, SOM230, was shown to have therapeutic effects on established bleomycin-induced fibrosis in rats. This work suggests MRI-detected total lung volume as readout for tissue-deposition in small rodent bleomycin models of pulmonary fibrosis. Copyright © 2014 the American Physiological Society.

  20. Physics of self-aligned assembly at room temperature

    NASA Astrophysics Data System (ADS)

    Dubey, V.; Beyne, E.; Derakhshandeh, J.; De Wolf, I.

    2018-01-01

    Self-aligned assembly, making use of capillary forces, is considered as an alternative to active alignment during thermo-compression bonding of Si chips in the 3D heterogeneous integration process. Various process parameters affect the alignment accuracy of the chip over the patterned binding site on a substrate/carrier wafer. This paper discusses the chip motion due to wetting and capillary force using a transient coupled physics model for the two regimes (that is, wetting regime and damped oscillatory regime) in the temporal domain. Using the transient model, the effect of the volume of the liquid and the placement accuracy of the chip on the alignment force is studied. The capillary time (that is, the time it takes for the chip to reach its mean position) for the chip is directly proportional to the placement offset and inversely proportional to the viscosity. The time constant of the harmonic oscillations is directly proportional to the gap between the chips due to the volume of the fluid. The predicted behavior from transient simulations is next experimentally validated and it is confirmed that the liquid volume and the initial placement affect the final alignment accuracy of the top chip on the bottom substrate. With statistical experimental data, we demonstrate an alignment accuracy reaching <1 μm.
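    The damped oscillatory regime can be reproduced qualitatively with a lumped spring-damper model of the chip; the mass, damping, and capillary stiffness below are illustrative assumptions, not values fitted to the experiments.

    ```python
    from scipy.integrate import solve_ivp

    m, c, k = 2e-6, 5e-4, 50.0   # kg, N*s/m, N/m; illustrative only
    x0 = 5e-6                    # 5 um initial placement offset

    def rhs(t, y):
        x, v = y                 # lateral offset and velocity of the chip
        return [v, -(c * v + k * x) / m]

    sol = solve_ivp(rhs, (0.0, 0.02), [x0, 0.0], max_step=1e-5)
    # Raising c (a more viscous liquid) lengthens the settling time, and
    # the chip is "aligned" once |x| decays below the ~1 um target accuracy.
    ```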

  1. A finite-volume ELLAM for three-dimensional solute-transport modeling

    USGS Publications Warehouse

    Russell, T.F.; Heberton, C.I.; Konikow, Leonard F.; Hornberger, G.Z.

    2003-01-01

    A three-dimensional finite-volume ELLAM method has been developed, tested, and successfully implemented as part of the U.S. Geological Survey (USGS) MODFLOW-2000 ground water modeling package. It is included as a solver option for the Ground Water Transport process. The FVELLAM uses space-time finite volumes oriented along the streamlines of the flow field to solve an integral form of the solute-transport equation, thus combining local and global mass conservation with the advantages of Eulerian-Lagrangian characteristic methods. The USGS FVELLAM code simulates solute transport in flowing ground water for a single dissolved solute constituent and represents the processes of advective transport, hydrodynamic dispersion, mixing from fluid sources, retardation, and decay. Implicit time discretization of the dispersive and source/sink terms is combined with a Lagrangian treatment of advection, in which forward tracking moves mass to the new time level, distributing mass among destination cells using approximate indicator functions. This allows the use of large transport time increments (large Courant numbers) with accurate results, even for advection-dominated systems (large Peclet numbers). Four test cases, including comparisons with analytical solutions and benchmarking against other numerical codes, are presented that indicate that the FVELLAM can usually yield excellent results, even if relatively few transport time steps are used, although the quality of the results is problem-dependent.

  2. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large volume limit of a reaction network, to network’s partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
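    A generic residual-resampling version of the idea, for a toy A -> B mass-action network, looks roughly as follows. The paper's schemes resample from the diffusion and linear-noise approximations instead, so this is only a structural sketch with hypothetical names.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def ode(t, y, theta):          # toy network: A -> B at rate theta
        a, b = y
        return [-theta * a, theta * a]

    def solve(theta, t_obs, y0):
        return solve_ivp(ode, (t_obs[0], t_obs[-1]), y0,
                         t_eval=t_obs, args=(theta,)).y.T

    def lse(t_obs, y_obs):
        res = lambda p: (solve(p[0], t_obs, y_obs[0]) - y_obs).ravel()
        return least_squares(res, [0.5]).x[0]

    def bootstrap_ci(t_obs, y_obs, n_boot=200):
        theta_hat = lse(t_obs, y_obs)
        fitted = solve(theta_hat, t_obs, y_obs[0])
        resid = y_obs - fitted
        rng = np.random.default_rng(1)
        reps = [lse(t_obs, fitted + resid[rng.integers(0, len(resid),
                                                       len(resid))])
                for _ in range(n_boot)]
        return theta_hat, np.quantile(reps, [0.025, 0.975])
    ```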

  3. Strain heating in process zones; implications for metamorphism and partial melting in the lithosphere

    NASA Astrophysics Data System (ADS)

    Devès, Maud H.; Tait, Stephen R.; King, Geoffrey C. P.; Grandin, Raphaël

    2014-05-01

    Since the late 1970s, most earth scientists have discounted the plausibility of melting by shear-strain heating because temperature-dependent creep rheology leads to negative feedback and self-regulation. This paper presents a new model of distributed shear-strain heating that can account for the genesis of large volumes of magmas in both the crust and the mantle of the lithosphere. The kinematic (geometry and rates) frustration associated with incompatible fault junctions (e.g., triple junctions) prevents localisation of all strain on the major faults. Instead, deformation distributes off the main faults, forming a large process zone that still deforms at high rates under both brittle and ductile conditions. The increased size of the shear-heated region minimises conductive heat loss, compared with that commonly associated with narrow shear zones, thus promoting strong heating and melting under reasonable rheological assumptions. Given the large volume of the heated zone, large volumes of melt can be generated even at small melt fractions.

  4. Enumerating Sparse Organisms in Ships’ Ballast Water: Why Counting to 10 Is Not So Easy

    PubMed Central

    2011-01-01

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships’ ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed. PMID:21434685
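    To make the power calculation concrete, the sketch below assumes live organisms are Poisson-distributed in well-mixed discharge, one of the simpler scenarios such modeling covers; the standard, concentrations, and volumes are placeholders.

    ```python
    from scipy import stats

    def detection_power(c_true, c_standard, volume, alpha=0.05):
        """Power to flag a noncompliant discharge when counts ~ Poisson(c*V).
        The critical count caps the type I error at alpha under the standard."""
        n_crit = stats.poisson.ppf(1.0 - alpha, c_standard * volume)
        return stats.poisson.sf(n_crit, c_true * volume)

    # Discharge at 1.5x a 10-organisms/m^3 standard: power vs. sampled volume.
    for v in (1, 3, 7, 15):
        print(f"{v:>2} m^3: power = {detection_power(15, 10, v):.2f}")
    ```

    The power climbs steeply with sampled volume, which is exactly why a rigorous lower limit on sampling volume matters.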

  5. Classification of SD-OCT volumes for DME detection: an anomaly detection approach

    NASA Astrophysics Data System (ADS)

    Sankar, S.; Sidibé, D.; Cheung, Y.; Wong, T. Y.; Lamoureux, E.; Milea, D.; Meriaudeau, F.

    2016-03-01

    Diabetic Macular Edema (DME) is the leading cause of blindness amongst diabetic patients worldwide. It is characterized by accumulation of water molecules in the macula leading to swelling. Early detection of the disease helps prevent further loss of vision. Naturally, automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, a pipeline for detecting DME diseases in OCT volumes is proposed in this paper. The method is based on anomaly detection using Gaussian Mixture Model (GMM). It starts with pre-processing the B-scans by resizing, flattening, filtering and extracting features from them. Both intensity and Local Binary Pattern (LBP) features are considered. The dimensionality of the extracted features is reduced using PCA. As the last stage, a GMM is fitted with features from normal volumes. During testing, features extracted from the test volume are evaluated with the fitted model for anomaly and classification is made based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets achieving a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, experiments show that the proposed method achieves better classification performances than other recently published works.
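    The anomaly-detection stage maps naturally onto standard tooling. A minimal scikit-learn sketch of the train-on-normals / flag-outliers logic, with feature extraction assumed already done and all thresholds hypothetical:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def fit_normal_model(normal_features, n_pca=20, n_components=3):
        """Fit PCA + GMM on intensity/LBP features from normal volumes only."""
        pca = PCA(n_components=n_pca).fit(normal_features)
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(pca.transform(normal_features))
        return pca, gmm

    def is_dme_volume(bscan_features, pca, gmm, ll_cut, max_outliers):
        """Call the volume DME if too many B-scans fall below the
        log-likelihood cut under the normal model."""
        ll = gmm.score_samples(pca.transform(bscan_features))
        return bool(np.sum(ll < ll_cut) > max_outliers)
    ```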

  6. Examining the effects of urban agglomeration polders on flood events in Qinhuai River basin, China with HEC-HMS model.

    PubMed

    Gao, Yuqin; Yuan, Yu; Wang, Huaizhi; Schmidt, Arthur R; Wang, Kexuan; Ye, Liu

    2017-05-01

    Urban agglomeration polders are a common flood control pattern in the eastern plain area and in some of the secondary river basins of China. A HEC-HMS model of the Qinhuai River basin incorporating this flood control pattern was established to simulate basin runoff, examine the impact of urban agglomeration polders on flood events, and estimate the effects of urbanization on the hydrological processes of the urban agglomeration polders in the Qinhuai River basin. The results indicate that urban agglomeration polders can increase peak flow and flood volume. The smaller the scale of the flood, the more significant the influence of the polders on the flood volume. The distribution of the city circle polders has no obvious impact on the flood volume but does affect the peak flow. The closer a polder is to the basin outlet, the smaller its influence on peak flows. As the level of urbanization of the city circle polders increases, flood volumes and peak flows gradually increase relative to those at the current level of urbanization (an impervious rate of 20%).

  7. Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.

    PubMed

    Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N

    2011-04-15

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.

  8. System Engineering Concept Demonstration, Effort Summary. Volume 1

    DTIC Science & Technology

    1992-12-01

    ... involve only the system software, user frameworks, and user tools. [Figure: user tools, Catalyst, external computer systems, and frameworks.] The paper discusses the analysis, synthesis, optimization, and conceptual design of Catalyst; its definition, design, test, and evaluation; and the operational concept. The conceptual requirements for the Process Model will allow system engineering practitioners to recognize and tailor the model.

  9. Defense AT&L (Volume 35, Number 6, November-December 2006)

    DTIC Science & Technology

    2006-12-01

    ... our view of reality is inherently unstable. That is, when we realize our current cultural preferences, frameworks, mental models, doctrines... manufacturing rules and procedures (controls). IDEF0 function modeling has since been adopted for other applications such as business process... negotiating busy intersections, and avoiding obstacles. The DARPA Grand Challenge Web site <http://www.darpa.mil/grandchallenge> is the primary resource for...

  10. The M-type stars

    NASA Technical Reports Server (NTRS)

    Johnson, Hollis Ralph; Querci, Francois R.; Jordan, Stuart (Editor); Thomas, Richard (Editor); Goldberg, Leo; Pecker, Jean-Claude

    1987-01-01

    The papers in this volume cover the following topics: (1) basic properties and photometric variability of M and related stars; (2) spectroscopy and nonthermal processes; (3) circumstellar radio molecular lines; (4) circumstellar shells, the formation of grains, and radiation transfer; (5) mass loss; (6) circumstellar chemistry; (7) thermal atmospheric models; (8) quasi-thermal models; (9) observations on the atmospheres of M dwarfs; and (10) theoretical work on M dwarfs.

  11. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
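    The key step, convolving the TPS-calculated profile with the same detector response before comparing it to measurement, can be sketched in a few lines. The chord-length kernel for a cylindrical chamber is a common idealization assumed here, not the kernel used in the paper.

    ```python
    import numpy as np

    def chamber_kernel(dx_mm, radius_mm=3.0):
        """Chord-length response of a cylindrical chamber scanned across
        its diameter, sampled on the profile grid spacing dx_mm."""
        x = np.arange(-radius_mm, radius_mm + dx_mm, dx_mm)
        w = np.sqrt(np.maximum(radius_mm**2 - x**2, 0.0))
        return w / w.sum()

    def objective(params, x_mm, measured, calc_profile):
        """Least squares between the chamber measurement and the calculated
        profile after applying identical volume averaging to both."""
        kernel = chamber_kernel(x_mm[1] - x_mm[0])
        convolved = np.convolve(calc_profile(params, x_mm), kernel, mode="same")
        return np.sum((convolved - measured) ** 2)

    # Minimizing `objective` over the penumbra-related beam-model parameters
    # (e.g., with scipy.optimize.minimize) mirrors the reoptimization idea.
    ```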

  12. Tissue Integration of a Volume-Stable Collagen Matrix in an Experimental Soft Tissue Augmentation Model.

    PubMed

    Ferrantino, Luca; Bosshardt, Dieter; Nevins, Myron; Santoro, Giacomo; Simion, Massimo; Kim, David

    Reducing the need for a connective tissue graft by using an efficacious biomaterial is an important task for dental professionals and patients. This experimental study aimed to test the soft tissue response to a volume-stable new collagen matrix. The device demonstrated good stability at six different time points ranging from 0 to 90 days of healing, with no alteration of the wound-healing processes. The 90-day histologic specimen demonstrates eventual replacement of most of the matrix with new connective tissue fibers.

  13. Journal of Air Transportation, Volume 9, No. 2

    NASA Technical Reports Server (NTRS)

    Bowen, Brent (Editor); Kabashkin, Igor (Editor); Gudmundsson, Sveinn Vidar (Editor); Scarpellini, Nanette (Editor)

    2004-01-01

    The following articles from the "Journal of Air Transportation" were processed: Future Requirements and Concepts for Cabins of Blended Wing Body Configurations: A Scenario Approach; Future Scenarios for the European Airline Industry: A Marketing-Based Perspective; An Application of the Methodology for Assessment of the Sustainability of the Air Transport System; Modeling the Effect of Enlarged Seating Room on Passenger Preferences of Domestic Airlines in Taiwan; Developing a Fleet Standardization Index for Airline Pricing; and Future Airport Capacity Utilization in Germany: Peaked Congestion and/or Idle Capacity.

  14. Physical Interpretation of Mixing Diagrams

    NASA Astrophysics Data System (ADS)

    Khain, Alexander; Pinsky, Mark; Magaritz-Ronen, L.

    2018-01-01

    The type of mixing at cloud edges is often determined by means of mixing diagrams showing the dependence of the normalized cube of the mean volume radius on the dilution level. The mixing diagrams correspond to the final equilibrium state of mixing between two air volumes. When interpreting in situ measurements, scattering diagrams are plotted in which normalized droplet concentration is used instead of the dilution level. Utilization of such scattering diagrams for interpretation of in situ observations faces significant difficulties and often leads to misinterpretation of the mixing process and to uncertain conclusions concerning the mixing type. In this study we analyze the scattering diagrams obtained by means of a Lagrangian-Eulerian model of a stratocumulus cloud. The model consists of 2,000 interacting Lagrangian parcels which mix with their neighbors during their motion in the atmospheric boundary layer. In the diagram, each parcel is denoted by a point. Changes of microphysical parameters of the parcel are represented by movements of the point in the scattering diagram. The method of plotting the scattering diagrams using the model is in many aspects similar to that used in in situ measurements. It is shown that a scattering diagram is a snapshot of a transient mixing process. The location of points in the scattering diagrams largely reflects the history and the origin of air parcels. The location of points on the scattering diagram characterizes the intensity of entrainment and different parameters of droplet size distributions (DSDs), such as concentration, mean volume (or effective) radius, and DSD width.

  15. Interactive Computing and Processing of NASA Land Surface Observations Using Google Earth Engine

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Burks, Jason; Bell, Jordan

    2016-01-01

    Google's Earth Engine offers a "big data" approach to processing large volumes of NASA and other remote sensing products. https://earthengine.google.com/ Interfaces include a JavaScript or Python-based API, useful for accessing and processing long periods of record of Landsat and MODIS observations. Other data sets are frequently added, including weather and climate model data sets, etc. Demonstrations here focus on exploratory efforts to perform land surface change detection related to severe weather and other disaster events.
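    A small example of the Python API style described above; the dataset identifier, dates, and region are placeholders, and the snippet assumes an authenticated Earth Engine account.

    ```python
    import ee
    ee.Initialize()

    region = ee.Geometry.Rectangle([-87.0, 34.5, -86.5, 35.0])  # placeholder

    def ndvi(img):
        # Landsat 8: B5 = near-infrared, B4 = red
        return img.normalizedDifference(['B5', 'B4'])

    col = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA').filterBounds(region)
    pre = ndvi(col.filterDate('2014-04-01', '2014-04-20').median())
    post = ndvi(col.filterDate('2014-05-01', '2014-05-20').median())
    change = post.subtract(pre)  # strongly negative pixels suggest damage
    ```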

  16. A novel contact model of piezoelectric traveling wave rotary ultrasonic motors with the finite volume method.

    PubMed

    Renteria-Marquez, I A; Renteria-Marquez, A; Tseng, B T L

    2018-06-06

    The operating principle of the piezoelectric traveling wave rotary ultrasonic motor is based on two energy conversion processes: the generation of the stator traveling wave and the rectification of the stator movement through the stator-rotor contact mechanism. This paper presents a methodology to model in detail the stator-rotor contact interface of these motors. A contact algorithm that couples a model of the stator, discretized with the finite volume method, with an analytical model of the rotor is presented. The outputs of the proposed model are the normal and tangential force distribution produced at the stator-rotor contact interface, the contact length, the height and shape of the stator traveling wave, and the rotor speed. The torque-speed characteristic of the USR60 is calculated with the proposed model, and the results are compared with the measured torque-speed characteristic of the motor. Good agreement between the proposed model results and the torque-speed characteristic of the USR60 was observed. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. New Concepts and New Processes in Special Recreation. Institute Report #1. National Institute on New Models of Community Based Recreation and Leisure Programs and Services for Handicapped Children and Youth.

    ERIC Educational Resources Information Center

    Nesbitt, John A.

    One of nine volumes in a series on recreation for the handicapped, the report examines new concepts and processes in special recreation. Among 10 topics considered are the following: goals of community recreation for the handicapped; delivery system; guidelines for management and development; local community leadership; planning, cooperation and…

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padaki, S.; Drzal, L.T.

    The consolidation process in composites made of powder-impregnated tapes differs from that of other material forms because of the distribution of fiber and matrix in the unconsolidated state. A number of factors (e.g., time, pressure, particle size, volume fraction, and viscosity) affect the efficiency of the consolidation of these tapes. This paper describes the development of a mathematical process model that determines the best set of parameters for the consolidation of a given prepreg tape.

  19. OCT-based full crystalline lens shape change during accommodation in vivo.

    PubMed

    Martinez-Enriquez, Eduardo; Pérez-Merino, Pablo; Velasco-Ocana, Miriam; Marcos, Susana

    2017-02-01

    The full shape of the accommodating crystalline lens was estimated using custom three-dimensional (3-D) spectral OCT and image processing algorithms. Automatic segmentation and distortion correction were used to construct 3-D models of the lens region visible through the pupil. The lens peripheral region was estimated with a trained and validated parametric model. Nineteen young eyes were measured at 0-6 D accommodative demands in 1.5 D steps. Lens volume, surface area, diameter, and equatorial plane position were automatically quantified. Lens diameter & surface area correlated negatively and equatorial plane position positively with accommodation response. Lens volume remained constant and surface area decreased with accommodation, indicating that the lens material is incompressible and the capsular bag elastic.

  20. OCT-based full crystalline lens shape change during accommodation in vivo

    PubMed Central

    Martinez-Enriquez, Eduardo; Pérez-Merino, Pablo; Velasco-Ocana, Miriam; Marcos, Susana

    2017-01-01

    The full shape of the accommodating crystalline lens was estimated using custom three-dimensional (3-D) spectral OCT and image processing algorithms. Automatic segmentation and distortion correction were used to construct 3-D models of the lens region visible through the pupil. The lens peripheral region was estimated with a trained and validated parametric model. Nineteen young eyes were measured at 0-6 D accommodative demands in 1.5 D steps. Lens volume, surface area, diameter, and equatorial plane position were automatically quantified. Lens diameter & surface area correlated negatively and equatorial plane position positively with accommodation response. Lens volume remained constant and surface area decreased with accommodation, indicating that the lens material is incompressible and the capsular bag elastic. PMID:28270993

  1. Atmospheric pressure plasma processing of polymeric materials utilizing close proximity indirect exposure

    DOEpatents

    Paulauskas, Felix L.; Bonds, Truman

    2016-09-20

    A plasma treatment method that includes providing a treatment chamber including an intermediate heating volume and an interior treatment volume. The interior treatment volume contains an electrode assembly for generating a plasma, and the intermediate heating volume heats the interior treatment volume. A work piece is traversed through the treatment chamber. A process gas is introduced to the interior treatment volume of the treatment chamber. A plasma is formed with the electrode assembly from the process gas, wherein a reactive species of the plasma is accelerated towards the work piece (e.g., a fiber tow) by flow vortices produced in the interior treatment volume by the electrode assembly.

  2. A model for prediction of STOVL ejector dynamics

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1989-01-01

    A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off and Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust augmenting ejector operation. The proposed ejector model suggests transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.

  3. Noise in Nonlinear Dynamical Systems 3 Volume Paperback Set

    NASA Astrophysics Data System (ADS)

    Moss, Frank; McClintock, P. V. E.

    2011-11-01

    Volume 1: List of contributors; Preface; Introduction to volume one; 1. Noise-activated escape from metastable states: an historical view Rolf Landauer; 2. Some Markov methods in the theory of stochastic processes in non-linear dynamical systems R. L. Stratonovich; 3. Langevin equations with coloured noise J. M. Sancho and M. San Miguel; 4. First passage time problems for non-Markovian processes Katja Lindenberg, Bruce J. West and Jaume Masoliver; 5. The projection approach to the Fokker-Planck equation: applications to phenomenological stochastic equations with coloured noises Paolo Grigolini; 6. Methods for solving Fokker-Planck equations with applications to bistable and periodic potentials H. Risken and H. D. Vollmer; 7. Macroscopic potentials, bifurcations and noise in dissipative systems Robert Graham; 8. Transition phenomena in multidimensional systems - models of evolution W. Ebeling and L. Schimansky-Geier; 9. Coloured noise in continuous dynamical systems: a functional calculus approach Peter Hanggi; Appendix. On the statistical treatment of dynamical systems L. Pontryagin, A. Andronov and A. Vitt; Index.

    Volume 2: List of contributors; Preface; Introduction to volume two; 1. Stochastic processes in quantum mechanical settings Ronald F. Fox; 2. Self-diffusion in non-Markovian condensed-matter systems Toyonori Munakata; 3. Escape from the underdamped potential well M. Buttiker; 4. Effect of noise on discrete dynamical systems with multiple attractors Edgar Knobloch and Jeffrey B. Weiss; 5. Discrete dynamics perturbed by weak noise Peter Talkner and Peter Hanggi; 6. Bifurcation behaviour under modulated control parameters M. Lucke; 7. Period doubling bifurcations: what good are they? Kurt Wiesenfeld; 8. Noise-induced transitions Werner Horsthemke and Rene Lefever; 9. Mechanisms for noise-induced transitions in chemical systems Raymond Kapral and Edward Celarier; 10. State selection dynamics in symmetry-breaking transitions Dilip K. Kondepudi; 11. Noise in a ring-laser gyroscope K. Vogel, H. Risken and W. Schleich; 12. Control of noise and applications to optical systems L. A. Lugiato, G. Broggi, M. Merri and M. A. Pernigo; 13. Transition probabilities and spectral density of fluctuations of noise driven bistable systems M. I. Dykman, M. A. Krivoglaz and S. M. Soskin; Index.

    Volume 3: List of contributors; Preface; Introduction to volume three; 1. The effects of coloured quadratic noise on a turbulent transition in liquid He II J. T. Tough; 2. Electrohydrodynamic instability of nematic liquid crystals: growth process and influence of noise S. Kai; 3. Suppression of electrohydrodynamic instabilities by external noise Helmut R. Brand; 4. Coloured noise in dye laser fluctuations R. Roy, A. W. Yu and S. Zhu; 5. Noisy dynamics in optically bistable systems E. Arimondo, D. Hennequin and P. Glorieux; 6. Use of an electronic model as a guideline in experiments on transient optical bistability W. Lange; 7. Computer experiments in nonlinear stochastic physics Riccardo Mannella; 8. Analogue simulations of stochastic processes by means of minimum component electronic devices Leone Fronzoni; 9. Analogue techniques for the study of problems in stochastic nonlinear dynamics P. V. E. McClintock and Frank Moss; Index.

  4. Modulation of red cell mass by neocytolysis in space and on Earth

    NASA Technical Reports Server (NTRS)

    Rice, L.; Alfrey, C. P.

    2000-01-01

    Astronauts predictably experience anemia after return from space. Upon entering microgravity, the blood volume in the extremities pools centrally and plasma volume decreases, causing plethora and erythropoietin suppression. There ensues neocytolysis, selective hemolysis of the youngest circulating red cells, allowing rapid adaptation to the space environment but becoming maladaptive on re-entry to a gravitational field. The existence of this physiologic control process was confirmed in polycythemic high-altitude dwellers transported to sea level. Pathologic neocytolysis contributes to the anemia of renal failure. Understanding the process has implications for optimizing erythropoietin-dosing schedules and the therapy of other human disorders. Human and rodent models of neocytolysis are being created to help find out how interactions between endothelial cells, reticuloendothelial phagocytes and young erythrocytes are altered, and to shed light on the expression of surface adhesion molecules underlying this process. Thus, unraveling a problem for space travelers has uncovered a physiologic process controlling the red cell mass that can be applied to human disorders on Earth.

  5. Application of a 2-step process for the biological treatment of sulfidic spent caustics.

    PubMed

    de Graaff, Marco; Klok, Johannes B M; Bijmans, Martijn F M; Muyzer, Gerard; Janssen, Albert J H

    2012-03-01

    This research demonstrates the feasibility and advantages of a 2-step process for the biological treatment of sulfidic spent caustics under halo-alkaline conditions (i.e. pH 9.5; Na⁺ = 0.8 M). Experiments with synthetically prepared solutions were performed in a continuously fed system consisting of two gas-lift reactors in series operated at aerobic conditions at 35 °C. The detoxification of sulfide to thiosulfate in the first step allowed the successful biological treatment of total-S loading rates up to 33 mmol L⁻¹ day⁻¹. In the second, biological step, the remaining sulfide and thiosulfate was completely converted to sulfate by haloalkaliphilic sulfide oxidizing bacteria. Mathematical modeling of the 2-step process shows that under the prevailing conditions an optimal reactor configuration consists of 40% 'abiotic' and 60% 'biological' volume, whilst the total reactor volume is 22% smaller than for the 1-step process. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. A prototype software methodology for the rapid evaluation of biomanufacturing process options.

    PubMed

    Chhatre, Sunil; Francis, Richard; O'Donovan, Kieran; Titchener-Hooker, Nigel J; Newcombe, Anthony R; Keshavarz-Moore, Eli

    2007-10-01

    A three-layered simulation methodology is described that rapidly evaluates biomanufacturing process options. In each layer, inferior options are screened out, while more promising candidates are evaluated further in the subsequent, more refined layer, which uses more rigorous models that require more data from time-consuming experimentation. Screening ensures laboratory studies are focused only on options showing the greatest potential. To simplify the screening, outputs of production level, cost and time are combined into a single value using multi-attribute-decision-making techniques. The methodology was illustrated by evaluating alternatives to an FDA (U.S. Food and Drug Administration)-approved process manufacturing rattlesnake antivenom. Currently, antivenom antibodies are recovered from ovine serum by precipitation/centrifugation and proteolyzed before chromatographic purification. Alternatives included increasing the feed volume, replacing centrifugation with microfiltration and replacing precipitation/centrifugation with a Protein G column. The best alternative used a higher feed volume and a Protein G step. By rapidly evaluating the attractiveness of options, the methodology facilitates efficient and cost-effective process development.
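
    The single-value screening score described in this record follows a standard weighted-sum multi-attribute pattern. The sketch below illustrates that pattern only; the attribute names, weights, and candidate option values are hypothetical, not figures from the study.

        # Weighted-sum multi-attribute screening of process options
        # (all option names, weights and numbers are hypothetical).
        options = {
            "base process":            {"yield_g": 120, "cost_k$": 90,  "time_h": 48},
            "higher feed volume":      {"yield_g": 150, "cost_k$": 95,  "time_h": 50},
            "protein G + higher feed": {"yield_g": 160, "cost_k$": 110, "time_h": 40},
        }
        weights = {"yield_g": 0.5, "cost_k$": 0.3, "time_h": 0.2}          # sum to 1
        benefit = {"yield_g": True, "cost_k$": False, "time_h": False}     # False: lower is better

        def normalize(attr, value):
            # Rescale each attribute to [0, 1] so that higher is always better.
            vals = [o[attr] for o in options.values()]
            lo, hi = min(vals), max(vals)
            if hi == lo:
                return 1.0
            x = (value - lo) / (hi - lo)
            return x if benefit[attr] else 1.0 - x

        scores = {
            name: sum(weights[a] * normalize(a, v) for a, v in attrs.items())
            for name, attrs in options.items()
        }
        for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {s:.2f}")

    Options are ranked by the combined score, and only the top candidates would proceed to the next, more data-hungry simulation layer.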

  7. Simulation of streamflow, evapotranspiration, and groundwater recharge in the Lower Frio River watershed, south Texas, 1961-2008

    USGS Publications Warehouse

    Lizarraga, Joy S.; Ockerman, Darwin J.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Fort Worth District; the City of Corpus Christi; the Guadalupe-Blanco River Authority; the San Antonio River Authority; and the San Antonio Water System, configured, calibrated, and tested a watershed model for a study area consisting of about 5,490 mi2 of the Frio River watershed in south Texas. The purpose of the model is to contribute to the understanding of watershed processes and hydrologic conditions in the lower Frio River watershed. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge by using a numerical representation of physical characteristics of the landscape, and meteorological and streamflow data. Additional time-series inputs to the model include wastewater-treatment-plant discharges, surface-water withdrawals, and estimated groundwater inflow from Leona Springs. Model simulations of streamflow, ET, and groundwater recharge were done for various periods of record depending upon available measured data for input and comparison, starting as early as 1961. Because of the large size of the study area, the lower Frio River watershed was divided into 12 subwatersheds; separate Hydrological Simulation Program-FORTRAN models were developed for each subwatershed. Simulation of the overall study area involved running simulations in downstream order. Output from the model was summarized by subwatershed, point locations, reservoir reaches, and the Carrizo-Wilcox aquifer outcrop. Four long-term U.S. Geological Survey streamflow-gaging stations and two short-term streamflow-gaging stations were used for streamflow model calibration and testing with data from 1991-2008. Calibration was based on data from 2000-08, and testing was based on data from 1991-99. Choke Canyon Reservoir stage data from 1992-2008 and monthly evaporation estimates from 1999-2008 also were used for model calibration. Additionally, 2006-08 ET data from a U.S. Geological Survey meteorological station in Medina County were used for calibration. Streamflow and ET calibration were considered good or very good. For the 2000-08 calibration period, total simulated flow volume and the flow volume of the highest 10 percent of simulated daily flows were calibrated to within about 10 percent of measured volumes at six U.S. Geological Survey streamflow-gaging stations. The flow volume of the lowest 50 percent of daily flows was not simulated as accurately but represented a small percent of the total flow volume. The model-fit efficiency for the weekly mean streamflow during the calibration periods ranged from 0.60 to 0.91, and the root mean square error ranged from 16 to 271 percent of the mean flow rate. The simulated total flow volumes during the testing periods at the long-term gaging stations exceeded the measured total flow volumes by approximately 22 to 50 percent at three stations and were within 7 percent of the measured total flow volumes at one station. For the longer 1961-2008 simulation period at the long-term stations, simulated total flow volumes were within about 3 to 18 percent of measured total flow volumes. The calibrations made by using Choke Canyon reservoir volume for 1992-2008, reservoir evaporation for 1999-2008, and ET in Medina County for 2006-08, are considered very good. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to better quantify certain model inputs, and measurement errors. 
Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error. A sensitivity analysis was performed for the Upper San Miguel subwatershed model to show the effect of changes to model parameters on the estimated mean recharge, ET, and surface runoff from that part of the Carrizo-Wilcox aquifer outcrop. Simulated recharge was most sensitive to the changes in the lower-zone ET (LZ
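
    The "model-fit efficiency" quoted for weekly mean streamflow in records like this one is commonly the Nash-Sutcliffe efficiency. As a sketch under that assumption, the statistic and the root mean square error expressed as a percentage of mean flow can be computed as follows, on hypothetical observed and simulated series.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no
            better than predicting the mean of the observations."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def rmse_percent_of_mean(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            rmse = np.sqrt(np.mean((obs - sim) ** 2))
            return 100.0 * rmse / obs.mean()

        # Hypothetical weekly mean streamflows (m^3/s)
        obs = [12.0, 30.0, 8.0, 55.0, 20.0, 15.0]
        sim = [10.0, 34.0, 9.0, 48.0, 24.0, 13.0]
        print(f"NSE  = {nash_sutcliffe(obs, sim):.2f}")
        print(f"RMSE = {rmse_percent_of_mean(obs, sim):.0f}% of mean flow")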

  8. On the thermodynamics of the photoacoustic effect of condensed matter in gas cells

    NASA Astrophysics Data System (ADS)

    Korpiun, P.; Büchner, B.

    1983-03-01

    The photoacoustic (PA) effect of condensed matter measured in a gas-microphone cell can be interpreted by the Rosencwaig-Gersho model. This model, originally developed for thermally thick gas columns, is extended to arbitrary gas lengths. The periodic variation of temperature varies the internal energy of the total volume of the gas, leading to a pressure oscillation by an isochoric process. Further, taking into account a residual volume as introduced by Tam and Wong, the description finally leads to an extended Rosencwaig-Gersho (ERG) model. Measurements with argon (γ=1.67) and Freon 13 (CClF3, γ=1.17) for thermally thin and thick gas columns confirm the isochoric character of the PA effect at frequencies far below the acoustic cell resonance. Experimental results of other groups can be interpreted very well with our model. Furthermore, in the low-frequency region the extended Rosencwaig-Gersho model leads to the same results as the model of McDonald and Wetsel.
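
    The isochoric character of the signal can be summarized, as a sketch rather than the full ERG derivation, by the constant-volume ideal-gas relation applied to the column-averaged temperature disturbance:

        \[
        \frac{\delta p(t)}{p_0} = \frac{\langle \delta T(t) \rangle}{T_0},
        \qquad
        \langle \delta T(t) \rangle = \frac{1}{l_g} \int_0^{l_g} \delta T(x,t)\, dx ,
        \]

    where l_g is the gas-column length; averaging the periodic temperature disturbance over the whole gas volume (including any residual volume) yields the microphone pressure signal at frequencies well below the cell resonance.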

  9. FIRST ORDER KINETIC GAS GENERATION MODEL PARAMETERS FOR WET LANDFILLS

    EPA Science Inventory

    Landfill gas is produced as a result of a sequence of physical, chemical, and biological processes occurring within an anaerobic landfill. Landfill operators, energy recovery project owners, regulators, and energy users need to be able to project the volume of gas produced and re...

  10. CD volume design and verification

    NASA Technical Reports Server (NTRS)

    Li, Y. P.; Hughes, J. S.

    1993-01-01

    In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own model of CD volumes by modifying a prototypical model. Rule-based verification of test volumes can then be performed against the volume definition. This working prototype has proven the concept of model-driven, rule-based design and verification for large quantities of data. The model defined for the CD-ROM volumes becomes a data model as well as an executable specification.
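
    A model-driven, rule-based verification step of the kind described might be sketched as follows; the volume fields and rules here are hypothetical illustrations, not the prototype's actual schema.

        # Hypothetical sketch: a volume "model" records expected properties,
        # rules are predicates over (volume, model), and verification
        # reports every violated rule.
        volume_model = {"format": "ISO-9660", "block_size": 2048, "max_depth": 8}

        rules = [
            ("block size matches model",
             lambda vol, model: vol["block_size"] == model["block_size"]),
            ("directory depth within limit",
             lambda vol, model: vol["depth"] <= model["max_depth"]),
            ("volume id present",
             lambda vol, model: bool(vol.get("volume_id"))),
        ]

        def verify(volume, model):
            return [name for name, rule in rules if not rule(volume, model)]

        test_volume = {"block_size": 2048, "depth": 11, "volume_id": "ARCHIVE_01"}
        for violation in verify(test_volume, volume_model):
            print("FAILED:", violation)   # -> FAILED: directory depth within limit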

  11. 3D patient-specific models for left atrium characterization to support ablation in atrial fibrillation patients.

    PubMed

    Valinoti, Maddalena; Fabbri, Claudio; Turco, Dario; Mantovan, Roberto; Pasini, Antonio; Corsi, Cristiana

    2018-01-01

    Radiofrequency ablation (RFA) is an important and promising therapy for atrial fibrillation (AF) patients. Optimization of patient selection and the availability of an accurate anatomical guide could improve the RFA success rate. In this study we propose a unified, fully automated approach to build a 3D patient-specific left atrium (LA) model, including pulmonary veins (PVs) in order to provide an accurate anatomical guide during RFA, and without PVs in order to characterize LA volumetry and support patient selection for AF ablation. Magnetic resonance data from twenty-six patients referred for AF RFA were processed applying an edge-based level set approach guided by a phase-based edge detector to obtain the 3D LA model with PVs. An automated technique based on the shape diameter function was designed and applied to remove the PVs and compute LA volume. The 3D LA models were qualitatively compared with 3D LA surfaces acquired during the ablation procedure. An expert radiologist manually traced the LA on MR images twice. LA surfaces from the automatic approach and manual tracing were compared by mean surface-to-surface distance. In addition, LA volumes were compared with volumes from manual segmentation by linear and Bland-Altman analyses. Qualitative comparison of the 3D LA models showed several inaccuracies in the model obtained during the RFA procedure; in particular, PV reconstruction was not accurate and the left atrial appendage was missing. LA surfaces were very similar (mean surface-to-surface distance: 2.3 ± 0.7 mm). LA volumes were in excellent agreement (y = 1.03x - 1.4, r = 0.99, bias = -1.37 ml (-1.43%), SD = 2.16 ml (2.3%), mean percentage difference = 1.3% ± 2.1%). Results showed that the proposed 3D patient-specific LA model with PVs describes LA anatomy better than models derived from the navigation system, thus potentially improving the localization of electrogram and voltage information and reducing fluoroscopic time during RFA. Quantitative assessment of LA volume derived from our 3D LA model without PVs is also accurate and may provide important information for patient selection for RFA. Copyright © 2017 Elsevier Inc. All rights reserved.
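
    The volume comparison above relies on standard linear and Bland-Altman statistics. A minimal sketch of the Bland-Altman computation, on hypothetical paired volumes rather than the study's data, is:

        import numpy as np

        def bland_altman(a, b):
            """Return bias (mean difference) and 95% limits of agreement."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias, sd = diff.mean(), diff.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        # Hypothetical LA volumes (ml): automatic vs. manual segmentation
        auto   = [88.0, 102.5, 95.1, 120.3, 76.8]
        manual = [90.1, 101.0, 96.4, 123.0, 78.2]
        bias, lo, hi = bland_altman(auto, manual)
        print(f"bias = {bias:.2f} ml, 95% LoA = [{lo:.2f}, {hi:.2f}] ml")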

  12. Polar Processes in a 50-year Simulation of Stratospheric Chemistry and Transport

    NASA Technical Reports Server (NTRS)

    Kawa, S.R.; Douglass, A. R.; Patrick, L. C.; Allen, D. R.; Randall, C. E.

    2004-01-01

    The unique chemical, dynamical, and microphysical processes that occur in the winter polar lower stratosphere are expected to interact strongly with changing climate and trace gas abundances. Significant changes in ozone have been observed, and prediction of future ozone and climate interactions depends on modeling these processes successfully. We have conducted an off-line model simulation of the stratosphere for trace gas conditions representative of 1975-2025, using meteorology from the NASA finite-volume general circulation model. The objective of this simulation is to examine the sensitivity of stratospheric ozone and chemical change to varying meteorology and trace gas inputs. This presentation will examine the dependence of ozone and related processes in polar regions on the climatological and trace gas changes in the model. The model's past performance is baselined against available observations, and a future ozone recovery scenario is forecast. Overall the model ozone simulation is quite realistic, but initial analysis of the detailed evolution of some observable processes suggests systematic shortcomings in our description of the polar chemical rates and/or mechanisms. Model sensitivities, strengths, and weaknesses will be discussed, with implications for uncertainty and confidence in coupled climate-chemistry predictions.

  13. Onset of multiple sclerosis before adulthood leads to failure of age-expected brain growth

    PubMed Central

    Aubert-Broche, Bérengère; Fonov, Vladimir; Narayanan, Sridar; Arnold, Douglas L.; Araujo, David; Fetco, Dumitru; Till, Christine; Sled, John G.; Collins, D. Louis

    2014-01-01

    Objective: To determine the impact of pediatric-onset multiple sclerosis (MS) on age-expected brain growth. Methods: Whole brain and regional volumes of 36 patients with relapsing-remitting MS onset prior to 18 years of age were segmented in 185 longitudinal MRI scans (2–11 scans per participant, 3-month to 2-year scan intervals). MRI scans of 25 age- and sex-matched healthy normal controls (NC) were also acquired at baseline and 2 years later on the same scanner as the MS group. A total of 874 scans from 339 participants from the NIH-funded MRI study of normal brain development acquired at 2-year intervals were used as an age-expected healthy growth reference. All data were analyzed with an automatic image processing pipeline to estimate the volume of brain and brain substructures. Mixed-effect models were built using age, sex, and group as fixed effects. Results: Significant group and age interactions were found with the adjusted models fitting brain volumes and normalized thalamus volumes (p < 10⁻⁴). These findings indicate a failure of age-normative brain growth for the MS group, and an even greater failure of thalamic growth. In patients with MS, T2 lesion volume correlated with a greater reduction in age-expected thalamic volume. To exclude any scanner-related influence on our data, we confirmed no significant interaction of group in the adjusted models between the NC and NIH MRI Study of Normal Brain Development groups. Conclusions: Our results provide evidence that the onset of MS during childhood and adolescence limits age-expected primary brain growth and leads to subsequent brain atrophy, implicating an early onset of the neurodegenerative aspect of MS. PMID:25378667
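
    A mixed-effects analysis of this shape can be sketched with statsmodels; the synthetic data and column names below are hypothetical, and a per-subject random intercept stands in for the repeated longitudinal scans.

        # Sketch of a mixed-effects model for longitudinal brain volumes
        # (synthetic, hypothetical data; not the study's dataset).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n, scans = 40, 3
        subj = np.repeat(np.arange(n), scans)
        age = rng.uniform(8, 18, n).repeat(scans) + np.tile(np.arange(scans) * 2.0, n)
        group = np.repeat(rng.integers(0, 2, n), scans)   # 0 = control, 1 = patient
        sex = np.repeat(rng.integers(0, 2, n), scans)
        vol = (1000 + 8 * age - 4 * group * age + 20 * sex
               + rng.normal(0, 10, n * scans) + np.repeat(rng.normal(0, 15, n), scans))
        df = pd.DataFrame({"subject": subj, "age": age, "group": group,
                           "sex": sex, "volume": vol})

        # Fixed effects: age, group, sex plus the group x age interaction;
        # random effect: an intercept per subject for the repeated scans.
        result = smf.mixedlm("volume ~ age * group + sex", df, groups=df["subject"]).fit()
        print(result.summary())   # a negative age:group term signals reduced growth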

  14. Volumes and bulk densities of forty asteroids from ADAM shape modeling

    NASA Astrophysics Data System (ADS)

    Hanuš, J.; Viikinkoski, M.; Marchis, F.; Ďurech, J.; Kaasalainen, M.; Delbo', M.; Herald, D.; Frappa, E.; Hayamizu, T.; Kerr, S.; Preston, S.; Timerson, B.; Dunham, D.; Talbot, J.

    2017-05-01

    Context. Disk-integrated photometric data of asteroids do not contain accurate information on shape details or size scale. Additional data such as disk-resolved images or stellar occultation measurements further constrain asteroid shapes and allow size estimates. Aims: We aim to use all the available disk-resolved images of approximately forty asteroids obtained by the Near-InfraRed Camera (Nirc2) mounted on the W.M. Keck II telescope together with the disk-integrated photometry and stellar occultation measurements to determine their volumes. We can then use the volume, in combination with the known mass, to derive the bulk density. Methods: We downloaded and processed all the asteroid disk-resolved images obtained by the Nirc2 that are available in the Keck Observatory Archive (KOA). We combined optical disk-integrated data and stellar occultation profiles with the disk-resolved images and use the All-Data Asteroid Modeling (ADAM) algorithm for the shape and size modeling. Our approach provides constraints on the expected uncertainty in the volume and size as well. Results: We present shape models and volume for 41 asteroids. For 35 of these asteroids, the knowledge of their mass estimates from the literature allowed us to derive their bulk densities. We see a clear trend of lower bulk densities for primitive objects (C-complex) and higher bulk densities for S-complex asteroids. The range of densities in the X-complex is large, suggesting various compositions. We also identified a few objects with rather peculiar bulk densities, which is likely a hint of their poor mass estimates. Asteroid masses determined from the Gaia astrometric observations should further refine most of the density estimates.

  15. Disposition of pentachlorophenol in rainbow trout (Salmo gairdneri): Effect of inhibition of metabolism

    USGS Publications Warehouse

    Stehly, G.R.; Hayton, W.L.

    1989-01-01

    The accumulation kinetics of pentachlorophenol (PCP) were investigated in rainbow trout (Salmo gairdneri) in the absence and presence of 25 mg/l salicylamide, an inhibitor of PCP metabolism. After exposure to 5 μg/l PCP over 1-96 h, the amount of PCP in the whole fish, its concentration in water and the total amount of metabolites (water, whole fish and bile) were measured. Equations for these variables, based on a two-compartment pharmacokinetic model, were fitted simultaneously to the data using the computer program NONLIN, which uses an iterative nonlinear least-squares technique. Salicylamide decreased the metabolic clearance of PCP, which resulted in an increase in the bioconcentration factor (BCF); this increase was partially offset by a salicylamide-induced decrease in the apparent volume of distribution of PCP. A clearance-volume compartment model permitted partitioning of the BCF in terms of the underlying physiologic and biochemical processes (uptake clearance, metabolic clearance and apparent volume of distribution).
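
    One common reading of such a clearance-volume model, sketched here rather than reproduced from the paper, is a single well-stirred compartment:

        \[
        V \frac{dC_f}{dt} = CL_u\, C_w - (CL_m + CL_e)\, C_f,
        \qquad
        \mathrm{BCF} = \frac{C_f^{ss}}{C_w} = \frac{CL_u}{CL_m + CL_e},
        \]

    where CL_u, CL_m and CL_e are the uptake, metabolic and excretory clearances, V is the apparent volume of distribution, and C_w and C_f are the water and fish concentrations. In this form, inhibiting metabolism lowers CL_m and raises the steady-state BCF, while V governs how quickly steady state is approached, consistent with the offsetting effects reported above.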

  16. A micromechanical interpretation of the temperature dependence of Beremin model parameters for french RPV steel

    NASA Astrophysics Data System (ADS)

    Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier

    2010-11-01

    The local approach to brittle fracture in low-alloyed steels is discussed in this paper. A bibliographical introduction highlights general trends and points of consensus on the topic and notes aspects that remain debated. French RPV steel 16MND5 (equ. ASTM A508 Cl.3) is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical model of brittle fracture at the elementary-volume scale, already used in previous work, is then recalled. It involves a multiscale modelling of microstructural plasticity which has been tuned to experimental measurements of inter-phase and inter-granular stress heterogeneities. The fracture probability of the elementary volume can then be computed using a randomly attributed defect-size distribution based on a realistic carbide repartition. This defect distribution is deterministically correlated to the stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic stress to fracture. Repeating the process allows Weibull parameters to be computed on the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the experimentally observed temperature dependence of Beremin's parameters for 16MND5 steel. It is shown that, assuming the hypotheses made in this work about cleavage micro-mechanisms are correct, the effective equivalent surface energy (i.e. surface energy plus plastically dissipated energy when blunting the crack tip) for propagating a crack has to be temperature dependent to explain the temperature evolution of Beremin's parameters.
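
    For reference, the Beremin weakest-link formulation referred to here is conventionally written as

        \[
        P_f = 1 - \exp\!\left[-\left(\frac{\sigma_w}{\sigma_u}\right)^{m}\right],
        \qquad
        \sigma_w = \left(\sum_i \sigma_{I,i}^{\,m}\, \frac{V_i}{V_0}\right)^{1/m},
        \]

    where σ_w is the Weibull stress built from the maximum principal stresses σ_I,i over the plastically deforming volume elements V_i, V_0 is a reference volume, and m and σ_u are the Weibull parameters whose temperature dependence is at issue in this study.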

  17. Physics-based interactive volume manipulation for sharing surgical process.

    PubMed

    Nakao, Megumi; Minato, Kotaro

    2010-05-01

    This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.

  18. Numerical Simulation of the Working Process in the Twin Screw Vacuum Pump

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Fu, Yu; Guo, Bei; Fu, Lijuan; Zhang, Qingqing; Chen, Xiaole

    2017-08-01

    Twin screw vacuum pumps inherit the advantages of screw machinery, such as high reliability, stable medium conveying, low vibration, simple and compact structure and convenient operation, and have been widely used in the petrochemical and air industries. On the basis of previous studies, this study analyzed the geometric features of the variable-pitch twin screw vacuum pump, such as the sealing line, the meshing line and the volume between teeth. A mathematical model for numerical simulation of the twin screw vacuum pump was established. The leakage paths of the working volume, including the sealing line and the addendum arc, were comprehensively considered. A corresponding simplified geometric model of leakage flow was built for the different leak paths, the flow coefficients were calculated, and the range of flow coefficient values for the different leak paths was given. The results showed that the flow coefficient of each leak path can be taken as a constant value for the studied geometry. Analysis of recorded indicator diagrams showed that increasing the rotational speed can dramatically decrease the exhaust pressure, while lower rotational speeds can lead to over-compression. The pressure of the leakage-affected isentropic process was higher than that of the theoretical process.

  19. A Volterra series-based method for extracting target echoes in the seafloor mining environment.

    PubMed

    Zhao, Haiming; Ji, Yaqian; Hong, Yujiu; Hao, Qi; Ma, Liyong

    2016-09-01

    The purpose of this research was to evaluate the applicability of the Volterra adaptive method to predict the target echo of an ultrasonic signal in an underwater seafloor mining environment. There is growing interest in mining of seafloor minerals because they offer an alternative source of rare metals. Mining the minerals causes the seafloor sediments to be stirred up and suspended in sea water. In such an environment, the target signals used for seafloor mapping cannot be detected because of the unavoidable presence of volume reverberation induced by the suspended sediments. The detection of target signals in reverberation is currently performed using a stochastic model (for example, the autoregressive (AR) model) based on the statistical characterisation of reverberation. However, we examined a new method of signal detection in volume reverberation based on the Volterra series, after confirming that the reverberation is a chaotic signal generated by a deterministic process. The advantage of this method over the stochastic model is that attributes of the specific physical process are considered in the signal detection problem. To test the Volterra series based method and its applicability to target signal detection in the volume reverberation environment derived from the seafloor mining process, we simulated the real-life conditions of seafloor mining in a water-filled tank of dimensions 5 × 3 × 1.8 m. The bottom of the tank was covered with 10 cm of an irregular sand layer, under which 5 cm of an irregular cobalt-rich crust layer was placed. The bottom was interrogated by an acoustic wave generated as 16 μs pulses at a frequency of 500 kHz. This frequency is demonstrated to ensure a resolution on the order of one centimetre, which is adequate in exploration practice. Echo signals were collected with a data acquisition card (PCI 1714 UL, 12-bit). Detection of the target echo in these signals was performed with both the Volterra series based model and the AR model. The results obtained confirm that the Volterra series based method is more efficient in the detection of the signal in reverberation than the conventional AR model (the accuracy is 80% for the PIM-Volterra prediction model versus 40% for the AR model). Copyright © 2016 Elsevier B.V. All rights reserved.
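
    A discrete second-order Volterra predictor of the general kind named here can be fitted by least squares; the sketch below uses a synthetic chaotic series as a stand-in for reverberation (all parameters hypothetical), with a target echo expected to show up as a burst of large prediction error.

        import numpy as np

        def volterra2_features(x, memory):
            """Linear plus quadratic (2nd-order Volterra) features from the
            last `memory` samples at each time step."""
            rows = []
            for n in range(memory, len(x)):
                lags = x[n - memory:n]
                quad = np.outer(lags, lags)[np.triu_indices(memory)]
                rows.append(np.concatenate(([1.0], lags, quad)))
            return np.array(rows)

        def fit_predict(x, memory=8):
            """Fit Volterra kernels by least squares; return one-step predictions."""
            X = volterra2_features(x, memory)
            y = x[memory:]
            h, *_ = np.linalg.lstsq(X, y, rcond=None)
            return X @ h

        # Hypothetical reverberation-like series: chaotic logistic map plus noise
        rng = np.random.default_rng(0)
        x = np.empty(500); x[0] = 0.3
        for n in range(1, len(x)):
            x[n] = 3.9 * x[n - 1] * (1 - x[n - 1])
        x += 0.01 * rng.standard_normal(len(x))

        pred = fit_predict(x, memory=8)
        err = x[8:] - pred   # a target echo would appear as a burst of large error
        print(f"prediction RMS error: {np.sqrt(np.mean(err**2)):.4f}")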

  20. Applications and Improvement of a Coupled, Global and Cloud-Resolving Modeling System

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Chern, J.; Atlas, R.

    2005-01-01

    Recently, Grabowski (2001) and Khairoutdinov and Randall (2001) proposed the use of 2D CRMs (cloud-resolving models) as a "super parameterization" [or multi-scale modeling framework (MMF)] to represent cloud processes within atmospheric general circulation models (GCMs). In the MMF, a fine-resolution 2D CRM takes the place of the single-column parameterization used in conventional GCMs. A prototype Goddard MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM) is now being developed. The prototype includes the fvGCM run at 2.5° x 2° horizontal resolution with 32 vertical layers from the surface to 1 mb, and the 2D (x-z) GCE using 64 horizontal and 32 vertical grid points with 4 km horizontal resolution and a cyclic lateral boundary. The time step for the 2D GCE would be 15 seconds, and the fvGCM-GCE coupling frequency would be 30 minutes (i.e. the fvGCM physical time step). We have successfully developed an fvGCM-GCE coupler for this prototype. Because the vertical coordinate of the fvGCM (a terrain-following floating Lagrangian coordinate) is different from that of the GCE (a z coordinate), vertical interpolations between the two coordinates are needed in the coupler. In interpolating fields from the GCE to the fvGCM, we use an existing fvGCM finite-volume piecewise parabolic mapping (PPM) algorithm, which conserves mass, momentum, and total energy. A new finite-volume PPM algorithm, which conserves mass, momentum and moist static energy in the z coordinate, is being developed for interpolating fields from the fvGCM to the GCE. In the meeting, we will discuss the major differences between the two MMFs (i.e., the CSU MMF and the Goddard MMF), and we will present performance and critical issues related to the MMFs. In addition, we will present multi-dimensional cloud datasets (i.e., a cloud data library) generated by the Goddard MMF that will be provided to the global modeling community to help improve the representation and performance of moist processes in climate models and to improve our understanding of cloud processes globally. The software tools needed to produce cloud statistics and to identify various types of clouds and cloud systems from both high-resolution satellite and model data will also be presented.

  1. Comparison of measured and modelled negative hydrogen ion densities at the ECR-discharge HOMER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauner, D.; Kurutz, U.; Fantz, U.

    2015-04-08

    As the negative hydrogen ion density n(H⁻) is a key parameter for the investigation of negative ion sources, its diagnostic quantification is essential in source development and operation as well as for fundamental research. By utilizing the photodetachment process of negative ions, two different diagnostic methods can generally be applied: via laser photodetachment, the density of negative ions is measured locally, but only relative to the electron density. To obtain absolute densities, the electron density has to be measured additionally, which introduces further uncertainties. Via cavity ring-down spectroscopy (CRDS), the absolute density of H⁻ is measured directly, however LOS-averaged over the plasma length. At the ECR discharge HOMER, where H⁻ is produced in the plasma volume, laser photodetachment is applied as the standard method to measure n(H⁻). The additional application of CRDS provides the possibility to directly obtain absolute values of n(H⁻), thereby successfully benchmarking the laser photodetachment system, as both diagnostics are in good agreement. In the investigated pressure range from 0.3 to 3 Pa, the measured negative hydrogen ion density shows a maximum at 1 to 1.5 Pa and an approximately linear response to increasing input microwave powers from 200 up to 500 W. Additionally, the volume production of negative ions is modelled 0-dimensionally by balancing H⁻ production and destruction processes. The modelled densities are adapted to the absolute measurements of n(H⁻) via CRDS, allowing collisions of H⁻ with hydrogen atoms (associative and non-associative detachment) to be identified as the dominant loss process of H⁻ in the plasma volume at HOMER. Furthermore, the characteristic peak of n(H⁻) observed at 1 to 1.5 Pa is identified to be caused by a comparable behaviour of the electron density with varying pressure, as n(e) determines the volume production rate via dissociative electron attachment to vibrationally excited hydrogen molecules.
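
    The 0-D balance described can be sketched, keeping only the dominant channels named in the record, as

        \[
        k_{\mathrm{DA}}\, n_e\, n_{\mathrm{H_2}(v)} \;\approx\; k_{\mathrm{det}}\, n_{\mathrm{H}}\, n_{\mathrm{H^-}}
        \quad\Rightarrow\quad
        n_{\mathrm{H^-}} \approx \frac{k_{\mathrm{DA}}\, n_e\, n_{\mathrm{H_2}(v)}}{k_{\mathrm{det}}\, n_{\mathrm{H}}},
        \]

    with production by dissociative electron attachment to vibrationally excited molecules (rate coefficient k_DA) balanced against associative and non-associative detachment on atomic hydrogen (k_det); the proportionality to n(e) is what links the H⁻ peak to the electron-density behaviour noted above.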

  2. High-resolution marine flood modelling coupling overflow and overtopping processes: framing the hazard based on historical and statistical approaches

    NASA Astrophysics Data System (ADS)

    Nicolae Lerma, Alexandre; Bulteau, Thomas; Elineau, Sylvain; Paris, François; Durand, Paul; Anselme, Brice; Pedreros, Rodrigo

    2018-01-01

    A modelling chain was implemented in order to propose a realistic appraisal of the risk in coastal areas affected by overflowing as well as overtopping processes. Simulations are performed through a nested downscaling strategy from regional to local scale at high spatial resolution, with explicit buildings, urban structures such as sea-front walls, and hydraulic structures liable to affect the propagation of water in urban areas. Validation of the model performance is based on analysis of the available hard and soft data and conversion of qualitative to quantitative information to reconstruct the area affected by flooding and the succession of events during two recent storms. Two joint probability approaches (joint exceedance contour and environmental contour) are used to define 100-year offshore conditions scenarios and to investigate the flood response to each scenario in terms of (1) maximum spatial extent of flooded areas, (2) volumes of water propagating inland and (3) water level in flooded areas. Scenarios of sea level rise are also considered in order to evaluate the potential hazard evolution. Our simulations show that for a maximising 100-year hazard scenario, for the municipality as a whole, 38 % of the affected zones are prone to overflow flooding and 62 % to flooding by propagation of overtopping water volume along the seafront. Results also reveal that for the two kinds of statistical scenarios a difference of about 5 % in the forcing conditions (water level, wave height and period) can produce significant differences in flooding, such as +13.5 % in the volume of water propagating inland or +11.3 % in the affected surface area. In some areas, the flood response appears to be very sensitive to the chosen scenario, with differences of 0.3 to 0.5 m in water level. The developed approach enables one to frame the 100-year hazard and to characterize spatially the robustness of, or uncertainty in, the results. Considering a 100-year scenario with mean sea level rise (0.6 m), hazard characteristics change dramatically, with an evolution of the overtopping/overflowing process ratio and an increase by a factor of 4.84 in the volume of water propagating inland and of 3.47 in flooded surface area.

  3. Development and evaluation of P/M processing techniques to improve and control the mechanical properties of metal injection molded parts

    NASA Astrophysics Data System (ADS)

    Sago, James Alan

    Metal Injection Molding (MIM) is one of the most rapidly growing areas of powder metallurgy (P/M), but the growth of MIM into new markets and more demanding applications is limited by two fundamental barriers: the availability of low-cost metal powders, and a lack of knowledge and understanding of how mechanical properties, especially toughness, are affected by the many parameters in the MIM process. The goals of this study were to investigate solutions to these challenges for MIM. Mechanical alloying (MA) is a technique which can produce a wide variety of powder compositions in a size range suited to MIM and in smaller batches. However, MA typically suffers from low production volumes and long milling times. This study shows that a saucer mill can produce sizable volumes of MA powders in times typically less than an hour. The MA process was also used to produce powders of 17-4PH stainless steel and the NiTi shape memory alloy for a MIM feedstock. This study shows that the MA powder characteristics led to successful MIM processing of parts. Previous studies have shown that the toughness of individual MIM parts can vary widely within a single production run and from one producer to another. In the last part of the study, a Design of Experiments (DOE) approach was used to evaluate the effects of MIM processing parameters on the mechanical properties. Analysis of variance produced mathematical models for Charpy impact toughness, hardness, density, and carbon content. Tensile properties did not produce a good model due to processing problems. The models and recommendations for improving both toughness and the reproducibility of toughness are presented.

  4. Codeformation processing of mechanically-dissimilar metal/intermetallic composites

    NASA Astrophysics Data System (ADS)

    Marte, Judson Sloan

    A systematic and scientific approach has been applied to the study of codeformation processing. A series of composites having mechanically-dissimilar phases were developed in which the high temperature flow behavior of the reinforcement material could be varied independently of the matrix. This was accomplished through the use of a series of intermetallic matrix composites (IMCs) as discontinuous reinforcements in an otherwise conventional metal matrix composite. The IMCs are produced using an in-situ reaction synthesis technique called the XD(TM) process. The temperature of the exothermic synthesis reaction, called the adiabatic temperature, has been calculated and shown to increase with increasing volume percentage of TiB2 reinforcement. Further, this temperature has been shown to affect the size and spacing of the TiB2, microstructural features which are often used in discontinuous composite strength models. Study of the high temperature flow behavior of the components of the metal/IMC composite is critical to the development of an understanding of codeformation. A series of compression tests was performed at 1000 to 1200 °C and strain rates of 10⁻³ and 10⁻⁴ s⁻¹. Peak flow stresses were used to evaluate the influence of material properties and process conditions. These data were incorporated into phenomenologically based constitutive equations that have been used to predict the flow behavior. It has been determined that plastic deformation of the IMCs occurs readily, and is largely independent of TiB2 content, at temperatures approaching the melting point of the intermetallic matrices. Ti-6Al-4V/IMC powder blends were extruded at high temperatures to achieve commensurately deformed microstructures. The results of codeformation processing were analyzed in terms of the plastic strain of the IMC particulates. IMC particle deformation was shown to increase with increasing IMC particle size, volume percentage of IMC, extrusion temperature, homologous temperature and extrusion strain rate, and with decreasing TiB2 reinforcement within the IMCs. A series of finite element models was developed to simulate codeformation processing via the extrusion of a discontinuously reinforced composite. The results were evaluated through comparison of the average equivalent strain in matrix and reinforcement elements. These results show that codeformation should increase with increasing volume percentage of IMC and homologous temperature, and with decreasing IMC particle size. With the exception of the particle size, these results correlate with those of the experimental extrusion analysis.

  5. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig

    PubMed Central

    Sahoo, Satya S.; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A.; Lhatoo, Samden D.

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This “neuroscience Big data” represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability—the ability to efficiently process increasing volumes of data; (b) Adaptability—the toolkit can be deployed across different computing configurations; and (c) Ease of programming—the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. The evaluation results demonstrate that the toolkit is highly scalable and adaptable, which makes it suitable for use in neuroscience applications as a scalable data processing toolkit. As part of the ongoing extension of NeuroPigPen, we are developing new modules to support statistical functions to analyze signal data for brain connectivity research. In addition, the toolkit is being extended to allow integration with scientific workflow systems. NeuroPigPen is released under BSD license at: https://sites.google.com/a/case.edu/neuropigpen/. PMID:27375472

  6. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig.

    PubMed

    Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability-the ability to efficiently process increasing volumes of data; (b) Adaptability-the toolkit can be deployed across different computing configurations; and (c) Ease of programming-the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. The evaluation results demonstrate that the toolkit is highly scalable and adaptable, which makes it suitable for use in neuroscience applications as a scalable data processing toolkit. As part of the ongoing extension of NeuroPigPen, we are developing new modules to support statistical functions to analyze signal data for brain connectivity research. In addition, the toolkit is being extended to allow integration with scientific workflow systems. NeuroPigPen is released under BSD license at: https://sites.google.com/a/case.edu/neuropigpen/.

  7. Dynamic contrast-enhanced CT of head and neck tumors: perfusion measurements using a distributed-parameter tracer kinetic model. Initial results and comparison with deconvolution-based analysis

    NASA Astrophysics Data System (ADS)

    Bisdas, Sotirios; Konstantinou, George N.; Sherng Lee, Puor; Thng, Choon Hua; Wagenblast, Jens; Baghi, Mehran; San Koh, Tong

    2007-10-01

    The objective of this work was to evaluate the feasibility of a two-compartment distributed-parameter (DP) tracer kinetic model to generate functional images of several physiologic parameters from dynamic contrast-enhanced CT data obtained from patients with extracranial head and neck tumors, and to compare the DP functional images to those obtained by deconvolution-based DCE-CT data analysis. We performed post-processing of DCE-CT studies obtained from 15 patients with benign and malignant head and neck tumors. We introduced a DP model of the impulse residue function for a capillary-tissue exchange unit, which accounts for the processes of convective transport and capillary-tissue exchange. The calculated parametric maps represented blood flow (F), intravascular blood volume (v1), extravascular extracellular blood volume (v2), vascular transit time (t1), permeability-surface area product (PS), transfer ratios k12 and k21, and the fraction of extracted tracer (E). Based on the same regions of interest (ROI) analysis, we calculated the tumor blood flow (BF), blood volume (BV) and mean transit time (MTT) by using a modified deconvolution-based analysis taking into account the extravasation of the contrast agent for PS imaging. We compared the corresponding values by using Bland-Altman plot analysis. We outlined 73 ROIs including tumor sites, lymph nodes and normal tissue. The Bland-Altman plot analysis revealed that the two methods showed an acceptable degree of agreement for blood flow and thus can be used interchangeably for measuring this parameter. Slightly worse agreement was observed between v1 in the DP model and BV, but even here the two tracer kinetic analyses can be used interchangeably. Whether the two techniques may be used interchangeably remained questionable for t1 versus MTT, as well as for measurements of the PS values. The application of the proposed DP model is feasible in the clinical routine, and it can be used interchangeably with the commercially available reference standard of the deconvolution-based approach for measuring blood flow and vascular volume. The lack of substantial agreement between the measurements of vascular transit time and permeability-surface area product may be attributed to the different tracer kinetic principles employed by the two models and the detailed capillary-tissue exchange physiological modeling of the DP technique.

  8. Effects of reservoir heterogeneity on scaling of effective mass transfer coefficient for solute transport

    NASA Astrophysics Data System (ADS)

    Leung, Juliana Y.; Srinivasan, Sanjay

    2016-09-01

    Modeling transport process at large scale requires proper scale-up of subsurface heterogeneity and an understanding of its interaction with the underlying transport mechanisms. A technique based on volume averaging is applied to quantitatively assess the scaling characteristics of effective mass transfer coefficient in heterogeneous reservoir models. The effective mass transfer coefficient represents the combined contribution from diffusion and dispersion to the transport of non-reactive solute particles within a fluid phase. Although treatment of transport problems with the volume averaging technique has been published in the past, application to geological systems exhibiting realistic spatial variability remains a challenge. Previously, the authors developed a new procedure where results from a fine-scale numerical flow simulation reflecting the full physics of the transport process albeit over a sub-volume of the reservoir are integrated with the volume averaging technique to provide effective description of transport properties. The procedure is extended such that spatial averaging is performed at the local-heterogeneity scale. In this paper, the transport of a passive (non-reactive) solute is simulated on multiple reservoir models exhibiting different patterns of heterogeneities, and the scaling behavior of effective mass transfer coefficient (Keff) is examined and compared. One such set of models exhibit power-law (fractal) characteristics, and the variability of dispersion and Keff with scale is in good agreement with analytical expressions described in the literature. This work offers an insight into the impacts of heterogeneity on the scaling of effective transport parameters. A key finding is that spatial heterogeneity models with similar univariate and bivariate statistics may exhibit different scaling characteristics because of the influence of higher order statistics. More mixing is observed in the channelized models with higher-order continuity. It reinforces the notion that the flow response is influenced by the higher-order statistical description of heterogeneity. An important implication is that when scaling-up transport response from lab-scale results to the field scale, it is necessary to account for the scale-up of heterogeneity. Since the characteristics of higher-order multivariate distributions and large-scale heterogeneity are typically not captured in small-scale experiments, a reservoir modeling framework that captures the uncertainty in heterogeneity description should be adopted.
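
    In volume-averaged form, the non-reactive solute balance that defines the effective coefficient can be sketched as

        \[
        \frac{\partial \langle c \rangle}{\partial t}
        + \langle \mathbf{u} \rangle \cdot \nabla \langle c \rangle
        = \nabla \cdot \left( K_{\mathrm{eff}}\, \nabla \langle c \rangle \right),
        \]

    where ⟨c⟩ and ⟨u⟩ are the solute concentration and velocity averaged over the chosen support volume, and K_eff lumps diffusion and dispersion; its growth with averaging scale is the scaling behaviour examined in the paper.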

  9. Design and optimization of hot-filling pasteurization conditions: Cupuaçu (Theobroma grandiflorum) fruit pulp case study.

    PubMed

    Silva, Filipa V M; Martins, Rui C; Silva, Cristina L M

    2003-01-01

    Cupuaçu (Theobroma grandiflorum) is an Amazonian tropical fruit with great economic potential. Pasteurization, by a hot-filling technique, was suggested for the preservation of this fruit pulp at room temperature. The process was implemented with local communities in Brazil. The process was modeled, and a computer program was written in Turbo Pascal. The relative importance among the pasteurization process variables (initial product temperature, heating rate, holding temperature and time, container volume and shape, cooling medium type and temperature) on the microbial target and quality was investigated by performing simulations according to a screening factorial design. Afterward, simulations of the different processing conditions were carried out. The holding temperature (T_F) and time (t_hold) affected the pasteurization value (P), and the container volume (V) largely influenced the quality parameters. The process was optimized for retail (1 L) and industrial (100 L) size containers by maximizing volume-average quality in terms of color lightness and sensory "fresh notes" and minimizing volume-average total color difference and sensory "cooked notes". Equivalent processes were designed and simulated (P(91 °C) = 4.6 min on Alicyclobacillus acidoterrestris spores) and the final quality (color, flavor, and aroma attributes) was evaluated. Color was slightly affected by the pasteurization processes, and few differences were observed between the six equivalent treatments designed (T_F between 80 and 97 °C). T_F ≥ 91 °C minimized "cooked notes" and maximized "fresh notes" of cupuaçu pulp aroma and flavor for the 1 L container. For the 100 L size, the development of "cooked notes" can be minimized with T_F ≥ 91 °C, but overall the quality was greatly degraded as a result of the long cooling times. A more efficient method to speed up the cooling phase was recommended, especially for industrial-size containers.
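
    The pasteurization value used as the microbial design target is conventionally defined as

        \[
        P = \int_0^{t} 10^{\,(T(\tau) - T_{\mathrm{ref}})/z}\, d\tau ,
        \]

    here with T_ref = 91 °C and z the thermal resistance constant of the target Alicyclobacillus acidoterrestris spores, so that equivalent processes are those accumulating the same P (4.6 min) along different temperature-time paths.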

  10. Water quality management library. 2. edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckenfelder, W.W.; Malina, J.F.; Patterson, J.W.

    1998-12-31

    A series of ten books offered in conjunction with Water Quality International, the Biennial Conference and Exposition of the International Association on Water Pollution Research and Control (IAWPRC). Volume 1, Activated Sludge Process, Design and Control, 2nd edition, 1998; Volume 2, Upgrading Wastewater Treatment Plants, 2nd edition, 1998; Volume 3, Toxicity Reduction, 2nd edition, 1998; Volume 4, Municipal Sewage Sludge Management, 2nd edition, 1998; Volume 5, Design and Retrofit of Wastewater Treatment Plants for Biological Nutrient Removal, 1st edition, 1992; Volume 6, Dynamics and Control of the Activated Sludge Process, 2nd edition, 1998; Volume 7, Design of Anaerobic Processes for the Treatment of Industrial and Municipal Wastes, 1st edition, 1992; Volume 8, Groundwater Remediation, 1st edition, 1992; Volume 9, Nonpoint Pollution and Urban Stormwater Management, 1st edition, 1995; Volume 10, Wastewater Reclamation and Reuse, 1st edition, 1998.

  11. New numerical approach for the modelling of machining applied to aeronautical structural parts

    NASA Astrophysics Data System (ADS)

    Rambaud, Pierrick; Mocellin, Katia

    2018-05-01

    The manufacturing of aluminium alloy structural aerospace parts involves several steps: forming (rolling, forging, etc.), heat treatments and machining. Before machining, the manufacturing processes have embedded residual stresses into the workpiece. The final geometry is obtained during this last step, when up to 90% of the raw material volume is removed by machining. During this operation, the mechanical equilibrium of the part is in constant evolution due to the redistribution of the initial stresses. This redistribution is the main cause of workpiece deflections during machining and of distortions after unclamping. Both may lead to non-conformity of the part with the geometrical and dimensional specifications and therefore to rejection of the part or to additional conforming steps. In order to improve the machining accuracy and the robustness of the process, the effect of the residual stresses has to be considered in the definition of the machining process plan and even in the geometrical definition of the part. In this paper, the authors present two new numerical approaches to the modelling of machining of aeronautical structural parts. The first deals with the use of an immersed-volume framework to model the cutting step, improving the robustness and quality of the resulting mesh compared to the previous version. The second concerns the mechanical modelling of the machining problem: the authors show that, in the framework of rolled aluminium parts, a linear elasticity model is workable in the finite element formulation and promising with regard to reducing computation times.

  12. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  13. Nation-scale adoption of new medicines by doctors: an application of the Bass diffusion model

    PubMed Central

    2012-01-01

    Background The adoption of new medicines is influenced by a complex set of social processes that have been widely examined in terms of individual prescribers’ information-seeking and decision-making behaviour. However, quantitative, population-wide analyses of how long it takes for new healthcare practices to become part of mainstream practice are rare. Methods We applied a Bass diffusion model to monthly prescription volumes of 103 often-prescribed drugs in Australia (monthly time series data totalling 803 million prescriptions between 1992 and 2010), to determine the distribution of adoption rates. Our aim was to test the utility of applying the Bass diffusion model to national-scale prescribing volumes. Results The Bass diffusion model was fitted to the adoption of a broad cross-section of drugs using national monthly prescription volumes from Australia (median R2 = 0.97, interquartile range 0.95 to 0.99). The median time to adoption was 8.2 years (IQR 4.9 to 12.1). The model distinguished two classes of prescribing patterns – those where adoption appeared to be driven mostly by external forces (19 drugs) and those driven mostly by social contagion (84 drugs). Those driven more prominently by internal forces were found to have shorter adoption times (p = 0.02 in a non-parametric analysis of variance by ranks). Conclusion The Bass diffusion model may be used to retrospectively represent the patterns of adoption exhibited in prescription volumes in Australia, and distinguishes between adoption driven primarily by external forces such as regulation, or internal forces such as social contagion. The eight-year delay between the introduction of a new medicine and the adoption of the prescribing practice suggests the presence of system inertia in Australian prescribing practices. PMID:22876867
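
    For reference, the Bass model fitted to the prescription series is conventionally written as

        \[
        \frac{f(t)}{1 - F(t)} = p + q\,F(t),
        \qquad
        F(t) = \frac{1 - e^{-(p+q)t}}{1 + (q/p)\, e^{-(p+q)t}},
        \]

    where F(t) is the cumulative fraction of eventual adopters, p is the coefficient of innovation (external forces such as regulation) and q the coefficient of imitation (internal forces such as social contagion); the two classes of prescribing pattern correspond to whether p or q dominates the fitted curve.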

  14. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.

  15. Universal scaling of permeability through the granular-to-continuum transition

    NASA Astrophysics Data System (ADS)

    Wadsworth, F. B.; Scheu, B.; Heap, M. J.; Kendrick, J. E.; Vasseur, J.; Lavallée, Y.; Dingwell, D. B.

    2015-12-01

    Magmas fragment forming a transiently granular material, which can weld back to a fluid-continuum. This process results in dramatic changes in the gas-volume fraction of the material, which impacts the gas permeability. We collate published data for the gas-volume fraction and permeability of volcanic and synthetic materials which have undergone this process to different extents and note that in all cases there exists a discontinuity in the relationship between these two properties. By discriminating data for which good microstructural information is provided, we use simple scaling arguments to collapse the data in both the still-granular, high gas-volume fraction regime and the fluid-continuum low gas-volume fraction regime such that a universal description can be achieved. We use this to argue for the microstructural meaning of the well-described discontinuity between gas permeability and gas-volume fraction and to infer the controls on the position of this transition between dominantly granular and dominantly fluid-continuum material descriptions. As a specific application, we consider the transiently granular magma transported through and deposited in fractures in more-coherent magmas, thought to be a primary degassing pathway in high viscosity systems. We propose that our scaling coupled with constitutive laws for densification can provide insights into the longevity of such degassing channels, informing sub-surface pressure modelling at such volcanoes.

  16. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

    [OCR fragments; only partial content is recoverable. Figures: a block diagram of the Automated Photointerpretation Testbed, comprising a knowledge base, inference engine, and image database; an initial segmentation of an image.] Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis and the interpretation process. Among the report's recommendations: additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure; a further recommendation concerns object detection.

  17. Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models

    NASA Astrophysics Data System (ADS)

    Samadian, P.; Parsa, M. H.; Mirzadeh, H.

    2015-02-01

    Nowadays, the application of kinetics models for predicting microstructures of steels subjected to thermo-mechanical treatments has increased to minimize direct experimentation, which is costly and time consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order to determine the appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and then the equilibrium volume fractions of phases at different temperatures were calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to consider the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for determining appropriate austenitization temperatures for the hot stamping process of AISI 4140 steel sheets, as its microstructure predictions agreed well with the experimental observations.
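
    As a rough illustration of how such a rate equation is used, the sketch below integrates a Kirkaldy-style transformation law, dX/dt = K(T) X^(2(1-X)/3) (1-X)^(2X/3), along a linear cooling path. The prefactor K(T), the cooling rate, and all constants are hypothetical stand-ins; the actual Kirkaldy/Li coefficients depend on composition, grain size, and undercooling.

    ```python
    # Integrate a Kirkaldy-style transformation rate over a cooling path.
    import numpy as np
    from scipy.integrate import solve_ivp

    def k_of_T(T):
        # Hypothetical bell-shaped rate coefficient peaking at mid undercooling;
        # a stand-in for the composition- and temperature-dependent prefactor.
        return 0.2 * np.exp(-((T - 650.0) / 80.0) ** 2)

    def rhs(t, X, cool_rate, T0):
        T = T0 - cool_rate * t                        # linear cooling path [C]
        Xc = np.clip(X, 1e-6, 1.0 - 1e-6)
        # Kirkaldy sigmoidal factor in the transformed fraction X
        s = Xc ** (2 * (1 - Xc) / 3) * (1 - Xc) ** (2 * Xc / 3)
        return k_of_T(T) * s

    sol = solve_ivp(rhs, (0.0, 60.0), [1e-4], args=(10.0, 850.0), max_step=0.5)
    print(f"predicted transformed fraction after cooling: {sol.y[0, -1]:.2f}")
    ```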

  18. Regional magnetic resonance imaging measures for multivariate analysis in Alzheimer's disease and mild cognitive impairment.

    PubMed

    Westman, Eric; Aguilar, Carlos; Muehlboeck, J-Sebastian; Simmons, Andrew

    2013-01-01

    Automated structural magnetic resonance imaging (MRI) processing pipelines are gaining popularity for Alzheimer's disease (AD) research. They generate regional volumes, cortical thickness measures and other measures, which can be used as input for multivariate analysis. It is not clear which combination of measures and normalization approach are most useful for AD classification and to predict mild cognitive impairment (MCI) conversion. The current study includes MRI scans from 699 subjects [AD, MCI and controls (CTL)] from the Alzheimer's disease Neuroimaging Initiative (ADNI). The Freesurfer pipeline was used to generate regional volume, cortical thickness, gray matter volume, surface area, mean curvature, Gaussian curvature, folding index and curvature index measures. 259 variables were used for orthogonal partial least squares to latent structures (OPLS) multivariate analysis. Normalisation approaches were explored and the optimal combination of measures determined. Results indicate that cortical thickness measures should not be normalized, while volumes should probably be normalized by intracranial volume (ICV). Combining regional cortical thickness measures (not normalized) with cortical and subcortical volumes (normalized with ICV) using OPLS gave a prediction accuracy of 91.5 % when distinguishing AD versus CTL. This model prospectively predicted future decline from MCI to AD with 75.9 % of converters correctly classified. Normalization strategy did not have a significant effect on the accuracies of multivariate models containing multiple MRI measures for this large dataset. The appropriate choice of input for multivariate analysis in AD and MCI is of great importance. The results support the use of un-normalised cortical thickness measures and volumes normalised by ICV.
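
    The normalization recipe suggested by these results is simple to apply. The sketch below divides volumetric measures by ICV while leaving thickness measures untouched; the column names are hypothetical, not actual FreeSurfer/ADNI identifiers.

    ```python
    # Normalize regional volumes by ICV; leave cortical thickness un-normalized.
    import pandas as pd

    df = pd.DataFrame({
        "hippocampus_vol_mm3": [3900.0, 3500.0],      # hypothetical values
        "entorhinal_thickness_mm": [3.2, 2.8],
        "icv_mm3": [1.45e6, 1.60e6],
    })

    # Only columns holding volumes get the ICV ratio.
    for col in [c for c in df.columns if c.endswith("_vol_mm3")]:
        df[col + "_norm"] = df[col] / df["icv_mm3"]

    print(df)
    ```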

  19. Biofiltration of methanol vapor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shareefdeen, Z.; Baltzis, B.C.; Oh, Youngsook

    1993-03-05

    Biofiltration of solvent and fuel vapors may offer a cost-effective way to comply with increasingly strict air emission standards. An important step in the development of this technology is to derive and validate mathematical models of the biofiltration process for predictive and scaleup calculations. For the study of methanol vapor biofiltration, an 8-membered bacterial consortium was obtained from methanol-exposed soil. The bacteria were immobilized on solid support and packed into a 5-cm diameter, 60-cm-high column provided with appropriate flowmeters and sampling ports. The solid support was prepared by mixing two volumes of peat with three volumes of perlite particles. Two series of experiments were performed. In the first, the inlet methanol concentration was kept constant while the superficial air velocity was varied from run to run. In the second series, the air flow rate (velocity) was kept constant while the inlet methanol concentration was varied. The unit proved effective in removing methanol at rates up to 112.8 g h^-1 m^-3 of packing. A mathematical model has been derived and validated. The model described and predicted experimental results closely. Both experimental data and model predictions suggest that the methanol biofiltration process was limited by oxygen diffusion and methanol degradation kinetics.

  20. Prediction and optimization of the recovery rate in centrifugal separation of platelet-rich plasma (PRP)

    NASA Astrophysics Data System (ADS)

    Piao, Linfeng; Park, Hyungmin; Jo, Chris

    2016-11-01

    We present a theoretical model of the recovery rates of platelets and white blood cells in the process of centrifugal separation of platelet-rich plasma (PRP). For conditions used in clinical practice, the separation process is modeled as one-dimensional particle sedimentation; a quasi-linear partial differential equation is derived based on the kinematic-wave theory. This is solved to determine the interface positions between supernatant-suspension and suspension-sediment, which are used to estimate the recovery rate of the plasma. While correcting Brown's hypothesis (1989), which claims that the platelet recovery is linearly proportional to that of plasma, we propose a new correlation model for predicting the platelet recovery as a function of the volume of whole blood, centrifugal acceleration, and time. For a range of practical parameters, such as hematocrit, volume of whole blood and centrifugation (time and acceleration), the predicted recovery rate shows good agreement with available clinical data. We propose that this model be further used to optimize the preparation method of PRP for patient-specific cases. Supported by a Grant (MPSS-CG-2016-02) through the Disaster and Safety Management Institute funded by Ministry of Public Safety and Security of Korean government.
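
    A minimal version of the sedimentation picture above tracks only the supernatant-suspension interface: under centrifugal acceleration w^2*r, a hindered Stokes settling speed gives dr/dt = k*r, so the interface radius grows exponentially during the spin. All physical values below (particle size, densities, hematocrit, spin speed) are illustrative assumptions, not the paper's parameters, and the full kinematic-wave solution also tracks the suspension-sediment interface.

    ```python
    # Exponential rise of the supernatant-suspension interface radius in a
    # spinning tube: dr/dt = k * r with k from hindered Stokes settling.
    import numpy as np

    d = 8e-6                        # particle diameter [m] (hypothetical)
    drho = 90.0                     # particle-plasma density difference [kg/m^3]
    mu = 1.6e-3                     # plasma viscosity [Pa s]
    phi0, n = 0.40, 4.65            # hematocrit and hindering exponent
    omega = 2 * np.pi * 1500 / 60   # 1500 rpm in rad/s

    k = drho * d**2 * omega**2 / (18 * mu) * (1 - phi0) ** n
    r0 = 0.05                       # initial interface radius [m]
    for t in (0, 120, 300, 600):    # seconds of centrifugation
        print(f"t = {t:3d} s: interface at r = {r0 * np.exp(k * t) * 100:.2f} cm")
    ```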

  1. ANALYSIS AND REDUCTION OF LANDSAT DATA FOR USE IN A HIGH PLAINS GROUND-WATER FLOW MODEL.

    USGS Publications Warehouse

    Thelin, Gail; Gaydas, Leonard; Donovan, Walter; Mladinich, Carol

    1984-01-01

    Data obtained from 59 Landsat scenes were used to estimate the areal extent of irrigated agriculture over the High Plains region of the United States for a ground-water flow model. This model provides information on current trends in the amount and distribution of water used for irrigation. The analysis and reduction process required that each Landsat scene be ratioed, interpreted, and aggregated. Data reduction by aggregation was an efficient technique for handling the volume of data analyzed. This process bypassed problems inherent in geometrically correcting and mosaicking the data at pixel resolution and combined the individual Landsat classification into one comprehensive data set.

  2. Study of the Evolution of the Electric Structure of a Convective Cloud Using the Data of a Numerical Nonstationary Three-Dimensional Model

    NASA Astrophysics Data System (ADS)

    Veremey, N. E.; Dovgalyuk, Yu. A.; Zatevakhin, M. A.; Ignatyev, A. A.; Morozov, V. N.

    2014-04-01

    A numerical nonstationary three-dimensional model of a convective cloud with a parameterized description of microphysical processes, with allowance for electrization processes, is considered. The results of numerical modeling of the cloud evolution for the specified atmospheric conditions are presented. The spatio-temporal distribution of the main cloud characteristics, including the volume charge density and the electric field, is obtained. The calculation results show that the electric structure of the cloud differs at its various life stages, i.e., it varies from unipolar to dipolar and then to tripolar. This conclusion is in fair agreement with field studies.

  3. Oyster's cells regulatory volume decrease: A new tool for evaluating the toxicity of low concentration hydrocarbons in marine waters.

    PubMed

    Ben Naceur, Chiraz; Maxime, Valérie; Ben Mansour, Hedi; Le Tilly, Véronique; Sire, Olivier

    2016-11-01

    Human activities require fossil fuels for transport and energy, a substantial part of which can accidentally or voluntarily (oil spillage) flow to the marine environment and cause adverse effects on human and ecosystem health. This experiment was designed to estimate the suitability of an original cellular biomarker for early quantification of the biological risk associated with hydrocarbon pollutants in seawater. Oocytes and hepatopancreas cells, isolated from oyster (Crassostrea gigas), were tested for their capacity to regulate their volume following a hypo-osmotic challenge. Cell volumes were estimated from cell images recorded at regular time intervals during a 90-min period. When exposed to diluted seawater (osmolalities from 895 to 712 mosm kg^-1), both cell types first swell and then undergo a shrinkage known as Regulatory Volume Decrease (RVD). This process is inversely proportional to the magnitude of the osmotic shock and is best fitted using a first-order exponential decay model. The Recovered Volume Factor (RVF) calculated from this model appears to be an accurate tool to compare cell responses. As shown by an approximately 50% decrease in RVF, the RVD process was significantly inhibited in cells sampled from oysters previously exposed to a low concentration of diesel oil (8.4 mg L^-1 for 24 h). This toxic effect was interpreted as a decreased permeability of the cell membranes resulting from an alteration of their lipidic structure by diesel oil compounds. In contrast, the previous contact of oysters with diesel did not induce any rise in gill glutathione S-transferase specific activity. Therefore, this work demonstrates that the study of the RVD process of cells selected from sentinel animal species could be an alternative bioassay for the monitoring of hydrocarbons and, probably, of various chemicals in the environment liable to alter cellular regulation. Especially, given the high sensitivity of this biomarker compared with a proven one, it could become a relevant and accurate tool to estimate the biological hazards of micropollutants in the water. Copyright © 2016 Elsevier Inc. All rights reserved.
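
    The first-order decay fit described above is straightforward to reproduce. The sketch below fits an exponential relaxation to a mock relative-volume series and derives a recovered-volume factor as the fraction of the osmotic swelling that is reversed; the data and the exact RVF definition are illustrative assumptions, not the paper's.

    ```python
    # Fit a first-order exponential decay to an RVD time series.
    import numpy as np
    from scipy.optimize import curve_fit

    def rvd(t, v_end, dv, k):
        """Relative cell volume: swollen by dv at t=0, decaying to v_end."""
        return v_end + dv * np.exp(-k * t)

    t = np.linspace(0, 90, 19)                                   # minutes
    rng = np.random.default_rng(1)
    v = rvd(t, 1.05, 0.30, 0.05) + rng.normal(0, 0.01, t.size)   # mock series

    (v_end, dv, k), _ = curve_fit(rvd, t, v, p0=[1.0, 0.3, 0.1])
    v_peak = v_end + dv
    rvf = dv / (v_peak - 1.0)   # fraction of the osmotic swelling recovered
    print(f"RVF = {rvf:.2f} (rate constant k = {k:.3f} 1/min)")
    ```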

  4. 1987 Oak Ridge model conference: Proceedings: Volume I, Part 3, Waste Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    A conference sponsored by the United States Department of Energy (DOE), was held on waste management. Topics of discussion were transuranic waste management, chemical and physical treatment technologies, waste minimization, land disposal technology and characterization and analysis. Individual projects are processed separately for the data bases. (CBS)

  5. Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program

    NASA Technical Reports Server (NTRS)

    Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.

    1981-01-01

    The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5 generated modal input file for TETRA is also described with a worked sample.

  6. ESTIMATION OF INFILTRATION RATE IN THE VADOSE ZONE: APPLICATION OF SELECTED MATHEMATICAL MODELS - VOLUME II

    EPA Science Inventory

    Movement of water into and through the vadose zone is of great importance to the assessment of contaminant fate and transport, agricultural management, and natural resource protection. The process of water movement is very dynamic, changing dramatically over time and space. Inf...

  7. GED Items. Volume 4, Numbers 1-6.

    ERIC Educational Resources Information Center

    GED Items, 1987

    1987-01-01

    The first of six issues of the GED Items newsletter published in 1987 contains articles on one company's approach to literacy in the workplace, General Educational Development (GED) teacher training videotapes, and a process model for improving thinking skills. Articles in issue 2 address military recruiting, synthesis thinking skills, and GED in…

  8. Innovative Didactic Designs: Visual Analytics and Visual Literacy in School

    ERIC Educational Resources Information Center

    Stenliden, Linnéa; Nissen, Jörgen; Bodén, Ulrika

    2017-01-01

    In a world of massively mediated information and communication, students must learn to handle rapidly growing information volumes inside and outside school. Pedagogy attuned to processing this growing production and communication of information is needed. However, ordinary educational models often fail to support students, trialing neither…

  9. Joint models of GPS and GRACE data of the postseismic deformation following the 2012 Mw 8.6 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Cheng, X.; Lambert, V.; Masuti, S.; Wang, R.; Barbot, S.; Moore, J. G.; Qiu, Q.; Yu, H.; Wu, S.; Dauwels, J.; Nanjundiah, P.; Bannerjee, P.; Peng, D.

    2017-12-01

    The April 2012 Mw 8.6 Indian Ocean earthquake is the largest strike-slip earthquake instrumentally recorded. The event ruptured multiple faults and reached great depths of up to 60 km, which may have induced significant viscoelastic flow in the asthenosphere. Instead of performing time-consuming iterative forward modeling, our previous studies used linear inversions for postseismic deformation, including both afterslip on the coseismic fault and viscoelastic flow in the strain volumes, making use of three-dimensional analytical Green's functions for distributed strain in finite volumes. Constraints and smoothing were added to reduce the degrees of freedom in order to obey certain physical laws. The advent of Gravity Recovery and Climate Experiment (GRACE) satellite gravity field data now allows us to measure the mass displacements associated with various Earth processes. In the case of postseismic deformation, viscoelastic flow can potentially lead to significant mass displacements in the asthenosphere, corresponding to the temporal and spatial gravity change. In this new joint model, we add GRACE gravity data to the GPS measurements of postseismic crustal displacement, so as to improve the constraint on the postseismic relaxation processes in the upper mantle.
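
    Schematically, the joint model amounts to stacking the two data sets, each weighted by its uncertainty, into one linear system over the same source parameters. The sketch below uses random matrices as stand-ins for the analytical Green's functions and omits the constraints and smoothing mentioned above.

    ```python
    # Weighted joint least-squares inversion of two data sets (GPS + gravity).
    import numpy as np

    rng = np.random.default_rng(5)
    n_src = 20
    G_gps = rng.normal(size=(60, n_src))     # stand-in GPS Green's functions
    G_grace = rng.normal(size=(30, n_src))   # stand-in GRACE Green's functions
    m_true = rng.normal(size=n_src)          # source strengths to recover

    d_gps = G_gps @ m_true + rng.normal(0, 0.1, 60)
    d_grace = G_grace @ m_true + rng.normal(0, 0.5, 30)

    # Scale each data set by its noise level, then stack and solve.
    A = np.vstack([G_gps / 0.1, G_grace / 0.5])
    b = np.concatenate([d_gps / 0.1, d_grace / 0.5])
    m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"rms model error: {np.sqrt(np.mean((m_est - m_true) ** 2)):.3f}")
    ```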

  10. Quasi-automatic 3D finite element model generation for individual single-rooted teeth and periodontal ligament.

    PubMed

    Clement, R; Schneider, J; Brambs, H-J; Wunderlich, A; Geiger, M; Sander, F G

    2004-02-01

    The paper demonstrates how to generate an individual 3D volume model of a human single-rooted tooth using an automatic workflow, suitable for implementation in finite element simulation. In several computational steps, computed tomography data of patients are used to obtain the global coordinates of the tooth's surface. First, the large set of geometric data is reduced significantly with several self-developed algorithms; the most important task is to preserve the geometric information of the real tooth. The second main part includes the creation of the volume model for tooth and periodontal ligament (PDL). This is realized with a continuous free-form surface of the tooth based on the remaining points. Generating such irregular objects for numerical use in biomechanical research normally requires enormous manual effort and time. The finite element mesh of the tooth, consisting of hexahedral elements, is composed of different materials: dentin, PDL and surrounding alveolar bone. It is capable of simulating tooth movement in a finite element analysis and may give valuable information for a clinical approach without the restrictions of tetrahedral elements. The mesh generator of the FE software ANSYS executed the mesh process for hexahedral elements successfully.

  11. Bulk chlorine uptake by polyamide active layers of thin-film composite membranes upon exposure to free chlorine-kinetics, mechanisms, and modeling.

    PubMed

    Powell, Joshua; Luh, Jeanne; Coronell, Orlando

    2014-01-01

    We studied the volume-averaged chlorine (Cl) uptake into the bulk region of the aromatic polyamide active layer of a reverse osmosis membrane upon exposure to free chlorine. Volume-averaged measurements were obtained using Rutherford backscattering spectrometry with samples prepared at a range of free chlorine concentrations, exposure times, and mixing, rinsing, and pH conditions. Our volume-averaged measurements complement previous studies that have quantified Cl uptake at the active layer surface (top ≈ 7 nm) and advance the mechanistic understanding of Cl uptake by aromatic polyamide active layers. Our results show that surface Cl uptake is representative of and underestimates volume-averaged Cl uptake under acidic conditions and alkaline conditions, respectively. Our results also support that (i) under acidic conditions, N-chlorination followed by Orton rearrangement is the dominant Cl uptake mechanism with N-chlorination as the rate-limiting step; (ii) under alkaline conditions, N-chlorination and dechlorination of N-chlorinated amide links by hydroxyl ion are the two dominant processes; and (iii) under neutral pH conditions, the rates of N-chlorination and Orton rearrangement are comparable. We propose a kinetic model that satisfactorily describes Cl uptake under acidic and alkaline conditions, with the largest discrepancies between model and experiment occurring under alkaline conditions at relatively high chlorine exposures.
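
    A hedged sketch of the acidic-condition mechanism, N-chlorination followed by Orton rearrangement treated as sequential first-order steps with free chlorine in excess, is given below. The rate constants and rate laws are illustrative assumptions, not the paper's fitted kinetic model, and the alkaline-side dechlorination pathway is omitted.

    ```python
    # Sequential N-chlorination -> Orton rearrangement, integrated as ODEs.
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 0.05, 0.20   # 1/h, hypothetical rate constants
    hocl = 1.0            # free chlorine held constant (assumed excess)

    def rhs(t, y):
        amide, n_cl, ring_cl = y
        r1 = k1 * amide * hocl   # N-chlorination (rate-limiting step)
        r2 = k2 * n_cl           # Orton rearrangement to ring-bound Cl
        return [-r1, r1 - r2, r2]

    sol = solve_ivp(rhs, (0.0, 48.0), [1.0, 0.0, 0.0])
    n_cl, ring_cl = sol.y[1, -1], sol.y[2, -1]
    print(f"N-Cl: {n_cl:.3f}, ring-Cl: {ring_cl:.3f}, "
          f"total Cl uptake: {n_cl + ring_cl:.3f}")
    ```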

  12. Thermodynamic Investigation of the Interaction between Polymer and Gases

    NASA Astrophysics Data System (ADS)

    Mahmood, Syed Hassan

    This thesis investigates the interaction between blowing agents and polymer matrix. Existing theoretical model was further developed to accommodate the polymer and blowing agent under study. The obtained results are not only useful for the optimization of the plastic foam fabrication process but also provides a different approach to usage of blowing agents. A magnetic suspension balance and an in-house visualizing dilatometer were used to obtain the sorption of blowing agents in polymer melts under elevated temperature and pressure. The proposed theoretical approach based on the thermodynamic model of SS-EOS is applied to understand the interaction of blowing agents with the polymer melt and one another (in the case of blend blowing agent). An in-depth study of the interaction of a blend of CO2 and DME with PS was conducted. Experimental volume swelling of the blend/PS mixture was measured and compared to the theoretical volume swelling obtained via ternary based SS-EOS, insuring the models validity. The effect of plasticization due to dissolution of DME on the solubility of CO2 in PS was then investigated by utilizing the aforementioned model. It was noted that the dissolution of DME increased the concentration of CO2 in PS and lowering the saturation pressure needed to dissolved a certain amount of CO2 in PS melt. The phenomenon of retrograde vitrification in PMMA induced due dissolution of CO2 was investigated in light of the thermodynamic properties resulting from the interaction of polymer and blowing agent. Solubility and volume swelling were measured in the pressure and temperature ranges promoting vitrification phenomenon, with relation being established between the thermodynamic properties and the vitrification process. Foaming of PMMA was conducted at various temperature values to investigate the application of this phenomenon.

  13. A microphysical parameterization of aqSOA and sulfate formation in clouds

    NASA Astrophysics Data System (ADS)

    McVay, Renee; Ervens, Barbara

    2017-07-01

    Sulfate and secondary organic aerosol (cloud aqSOA) can be chemically formed in cloud water. Model implementation of these processes represents a computational burden due to the large number of microphysical and chemical parameters. Chemical mechanisms have been condensed by reducing the number of chemical parameters. Here an alternative is presented to reduce the number of microphysical parameters (number of cloud droplet size classes). In-cloud mass formation is surface and volume dependent due to surface-limited oxidant uptake and/or size-dependent pH. Box and parcel model simulations show that using the effective cloud droplet diameter (proportional to total volume-to-surface ratio) reproduces sulfate and aqSOA formation rates within ≤30% as compared to full droplet distributions; other single diameters lead to much greater deviations. This single-class approach reduces computing time significantly and can be included in models when total liquid water content and effective diameter are available.
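
    The single-diameter reduction above relies on the effective diameter being proportional to the total volume-to-surface ratio; for spheres this is d_eff = sum(d^3)/sum(d^2) = 6*V_total/A_total. The sketch below computes it for an illustrative lognormal droplet population.

    ```python
    # Effective diameter of a droplet population (total volume / surface ratio).
    import numpy as np

    d = np.random.default_rng(2).lognormal(
        mean=np.log(15e-6), sigma=0.4, size=10_000)   # droplet diameters [m]

    v_total = np.pi / 6.0 * np.sum(d**3)   # total droplet volume
    a_total = np.pi * np.sum(d**2)         # total droplet surface area
    d_eff = 6.0 * v_total / a_total        # equals sum(d^3) / sum(d^2)

    print(f"effective diameter: {d_eff * 1e6:.1f} um")
    ```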

  14. A Note on Spatial Averaging and Shear Stresses Within Urban Canopies

    NASA Astrophysics Data System (ADS)

    Xie, Zheng-Tong; Fuka, Vladimir

    2018-04-01

    One-dimensional urban models embedded in mesoscale numerical models may place several grid points within the urban canopy. This requires an accurate parametrization for shear stresses (i.e. vertical momentum fluxes), including the dispersive stress and momentum sinks at these points. We used a case study with a packing density of 33% and rigorously checked the vertical variation of the spatially-averaged total shear stress, which can be used in a one-dimensional column urban model. We found that the intrinsic spatial average, in which the volume or area of the solid parts is not included in the averaging process, yields a greater time- and space-averaged total stress within the canopy and a more evident abrupt change at the top of the buildings than the comprehensive spatial average, in which the volume or area of the solid parts is included in the average.

  15. A Study of Green's Function Methods Applied to Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.

    2001-01-01

    The purpose of this research was to study the propagation of galactic ions through various materials. Galactic light ions result from the break up of heavy ion particles, and their propagation through materials is modeled using the one-dimensional Boltzmann equation. When ions enter materials, two processes can occur: (i) ions interact with orbital electrons, causing ionization within the material, and (ii) ions collide with atoms, producing secondary particles that penetrate deeper within the material. These processes are modeled by a continuum model. The basic idea is to place a control volume within the material and examine the change in ion flux across this control volume. In this way one can derive the basic equations for the transport of light and heavy ions in matter. Green's function perturbation methods can then be employed to solve the resulting equations using energy dependent nuclear cross sections.

  16. Numerical modeling of heat transfer and pasteurizing value during thermal processing of intact egg.

    PubMed

    Abbasnezhad, Behzad; Hamdami, Nasser; Monteau, Jean-Yves; Vatankhah, Hamed

    2016-01-01

    Thermal pasteurization of eggs, a widely used nutritive food, has been simulated. A three-dimensional numerical model, implementing computational fluid dynamics codes for the heat transfer equations with natural convection and conduction mechanisms and based on the finite element method, was developed to study the effect of air cell size and eggshell thickness. The model, confirmed by comparing experimental and numerical results, was able to predict the temperature profiles, the slowest heating zone, and the required heating time during pasteurization of intact eggs. The results showed that the air cell acted as a heat insulator. Increasing the air cell volume resulted in a decrease of the heat transfer rate and an increase of the required pasteurization time (up to 14%). The findings show that the effect of eggshell thickness on thermal pasteurization was not considerable in comparison to that of the air cell volume.

  17. Processing and utilization of LiDAR data as a support for a good management of DDBR

    NASA Astrophysics Data System (ADS)

    Nichersu, I.; Grigoras, I.; Constantinescu, A.; Mierla, M.; Tifanov, C.

    2012-04-01

    Danube Delta Biosphere Reserve (DDBR) has a surface of 5,800 km2 and is situated in the South-East of Europe, in the East of Romania. The paper takes into account data related to the elevation surfaces of the DDBR (Digital Terrain Model, DTM, and Digital Surface Model, DSM). To produce such elevation models for the entire area of the DDBR, the most modern method, Light Detection And Ranging (LiDAR), was used. The raw LiDAR data (x, y, z) for each point were transformed into grid formats for the DTM and DSM. Based on these data, multiple GIS analyses can be done for management purposes: 1D2D hydraulic modeling scenarios, flooding regime and protection, biomass volume estimation, and GIS biodiversity processing. These analyses are very useful in the management planning process. The 1D2D hydraulic modeling scenarios are used by the administrative authority to predict the direction of fluvial water flow and the places where flooding could occur. The extent of terrain that would be occupied by flood water can also be predicted. The flooding regime gives information about the frequency of floods and their intensity, and the water remanence period can be predicted as well. Flood protection relates directly to the socio-cultural communities and their assets that are at risk of being flooded; this raises the problem of building dykes and other flood protection systems. The biomass volume is derived from the LiDAR point cloud returns that describe only the vegetation; the volume of biomass is an important item in the management of a Biosphere Reserve. The LiDAR points that refer to vegetation can also help in identifying groups of vegetal associations. All this information together builds good premises for sound management. Keywords: Danube Delta Biosphere Reserve, LiDAR data, DTM, DSM, flooding, management

  18. Constraining Runoff Source Fraction Contributions from Alpine Glaciers through the Combined Application of Geochemical Proxies and Bayesian Monte Carlo Isotope Mixing Models

    NASA Astrophysics Data System (ADS)

    Arendt, C. A.; Aciego, S.; Hetland, E.

    2015-12-01

    Processes that drive glacial ablation directly impact surrounding ecosystems and communities that depend on glacial meltwater as a freshwater reservoir: crucially, freshwater runoff from alpine and Arctic glaciers has large implications for watershed ecosystems and contingent economies. Furthermore, glacial hydrology processes are a complex and fundamental part of understanding high-latitude environments at present and predicting how they might change in the future. Specifically, developing better estimates of the origin of freshwater discharge, as well as the duration and amplitude of extreme melting and precipitation events, could provide crucial constraints on these processes and allow glacial watershed systems to be modeled more effectively. In order to investigate the universality of the temporal and spatial melt relationships that exist in glacial systems, I investigate the isotopic composition of glacial meltwater and proximal seawater, including the stable isotopes δ18O and δD, measured in glacial water samples I collected from the alpine Athabasca Glacier in the Canadian Rockies. This abstract is focused on extrapolating the relative contributions of meltwater sources - snowmelt, ice melt, and summer precipitation - using a coupled statistical-chemical model (Arendt et al., 2015). I apply δ18O and δD measurements of Athabasca Glacier subglacial water samples to a Bayesian Monte Carlo (BMC) estimation scheme. Importantly, this BMC model also assesses the uncertainties associated with these fractional-contribution estimates, indicating how well the system is constrained. By defining the proportion of overall melt coming from snow versus ice using stable isotopes, the volume of water generated by ablation can be calculated. This water volume has two important implications. First, communities that depend on glacial water for aquifer recharge can start assessing future water resources, as glacial decline will make snowmelt the dominant water reservoir. Second, the calculated source-fraction water volumes are a starting point for additional geochemical models to investigate water storage within the subglacial hydrological network.
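
    A minimal Bayesian Monte Carlo flavor of this estimation can be written in a few lines: sample source fractions uniformly on the simplex and weight them by how well they reproduce the measured δ18O and δD. All end-member values, the observation, and the uncertainties below are illustrative assumptions, not the Athabasca data.

    ```python
    # Bayesian Monte Carlo three-end-member isotope mixing (illustrative).
    import numpy as np

    rng = np.random.default_rng(3)
    # End members: snowmelt, ice melt, summer rain -> (d18O, dD) in per mil
    sources = np.array([[-22.0, -168.0],
                        [-19.0, -145.0],
                        [-14.0, -105.0]])
    obs = np.array([-19.5, -149.0])     # measured subglacial mixture
    sigma = np.array([0.3, 2.0])        # measurement uncertainties

    f = rng.dirichlet(np.ones(3), size=200_000)   # candidate fraction triplets
    pred = f @ sources                            # predicted mixture isotopes
    w = np.exp(-0.5 * np.sum(((pred - obs) / sigma) ** 2, axis=1))
    w /= w.sum()

    mean = w @ f                                  # posterior mean fractions
    sd = np.sqrt(w @ (f - mean) ** 2)             # posterior spread
    for name, m, s in zip(["snow", "ice", "rain"], mean, sd):
        print(f"{name}: {m:.2f} +/- {s:.2f}")
    ```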

  19. Behavior of Aging, Micro-Void, and Self-Healing of Glass/Ceramic Materials and Its Effect on Mechanical Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenning N.; Sun, Xin; Khaleel, Mohammad A.

    This chapter first describes tests to investigate the temporal evolution of the volume fraction of ceramic phases, the evolution of micro-damage, and the self-healing behavior of the glass ceramic sealant used in SOFCs; then a phenomenological model based on mechanical analogs is developed to describe the temperature-dependent Young's modulus of glass ceramic seal materials. It was found that after the initial sintering process, further crystallization of the glass ceramic sealant does not stop, but slows down, reducing the residual glass content while boosting the ceramic crystalline content. Under the long-term operating environment, distinct fibrous and needle-like crystals in the amorphous phase disappeared, and smeared/diffused phase boundaries between the glass phase and ceramic phase were observed. Meanwhile, micro-damage was induced by the cooling-down process from the operating temperature to room temperature, which can potentially degrade the mechanical properties of the glass/ceramic sealant. The glass/ceramic sealant self-healed upon reheating to the SOFC operating temperature, which can restore its mechanical performance. The phenomenological model developed here includes the effects of continuing aging and devitrification on the ceramic phase volume fraction and the resulting mechanical properties of the glass ceramic seal material. The effects of micro-voids and self-healing are also considered using a continuum damage mechanics (CDM) model. The formulation is for glass/ceramic seals in general, and it can be further developed to account for the effects of various processing parameters. This model was applied to G18, and the temperature-dependent experimental measurements were used to calibrate the modeling parameters and to validate the model prediction.

  20. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    NASA Astrophysics Data System (ADS)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. Volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
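
    The failure rule described above can be caricatured in one dimension: wherever the local slope exceeds the internal friction angle, move just enough material downslope to restore the friction slope, conserving mass. The half-and-half redistribution and the 1-D profile below are simplifying assumptions; the actual algorithm operates on the elements of an unstructured 2-D mesh and rotates them about an axis.

    ```python
    # Relax an over-steep 1-D bed profile to the internal friction slope.
    import numpy as np

    def apply_bank_failure(z, dx, phi_deg=30.0, n_sweeps=50):
        """Relax bed elevations z toward the maximum stable slope."""
        s_max = np.tan(np.radians(phi_deg))        # maximum stable slope
        for _ in range(n_sweeps):
            moved = False
            for i in range(len(z) - 1):
                slope = (z[i] - z[i + 1]) / dx
                if slope > s_max:                  # unstable face
                    excess = 0.5 * (slope - s_max) * dx
                    z[i] -= excess                 # material off the high node...
                    z[i + 1] += excess             # ...onto the low node (mass kept)
                    moved = True
            if not moved:
                break
        return z

    z = np.array([2.0, 1.9, 1.0, 0.2, 0.1])        # over-steep bank face [m]
    print(apply_bank_failure(z, dx=0.5))
    ```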

  1. Water anomalous thermodynamics, attraction, repulsion, and hydrophobic hydration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerdeiriña, Claudio A., E-mail: calvarez@uvigo.es; Debenedetti, Pablo G., E-mail: pdebene@princeton.edu

    A model composed of van der Waals-like and hydrogen bonding contributions that simulates the low-temperature anomalous thermodynamics of pure water while exhibiting a second, liquid-liquid critical point [P. H. Poole et al., Phys. Rev. Lett. 73, 1632 (1994)] is extended to dilute solutions of nonionic species. Critical lines emanating from such second critical point are calculated. While one infers that the smallness of the water molecule may be a relevant factor for those critical lines to move towards experimentally accessible regions, attention is mainly focused on the picture our model draws for the hydration thermodynamics of purely hydrophobic and amphiphilic non-electrolyte solutes. We first focus on differentiating solvation at constant volume from the corresponding isobaric process. Both processes provide the same viewpoint for the low solubility of hydrophobic solutes: it originates from the combination of weak solute-solvent attractive interactions and the specific excluded-volume effects associated with the small molecular size of water. However, a sharp distinction is found when exploring the temperature dependence of hydration phenomena since, in contrast to the situation for the constant-V process, the properties of pure water play a crucial role at isobaric conditions. Specifically, the solubility minimum as well as enthalpy and entropy convergence phenomena, exclusively ascribed to isobaric solvation, are closely related to water’s density maximum. Furthermore, the behavior of the partial molecular volume and the partial molecular isobaric heat capacity highlights the interplay between water anomalies, attraction, and repulsion. The overall picture presented here is supported by experimental observations, simulations, and previous theoretical results.

  2. Time-dependent source model of the Lusi mud volcano

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.

  3. Submarine pipeline on-bottom stability. Volume 2: Software and manuals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-12-01

    The state-of-the-art in pipeline stability design has been changing very rapidly in recent years. The physics governing on-bottom stability are much better understood now than they were eight years ago, due largely to research and large scale model tests sponsored by PRCI. Analysis tools utilizing this new knowledge have been developed. These tools provide the design engineer with a rational approach for weight coating design, which he can use with confidence because the tools have been developed based on full scale and near full scale model tests. These tools represent the state-of-the-art in stability design and model the complex behavior of pipes subjected to both wave and current loads. These include: hydrodynamic forces which account for the effect of the wake (generated by flow over the pipe) washing back and forth over the pipe in oscillatory flow; and the embedment (digging) which occurs as a pipe resting on the seabed is exposed to oscillatory loadings and small oscillatory deflections. This report has been developed as a reference handbook for use in on-bottom pipeline stability analysis. It consists of two volumes. Volume one is devoted to descriptions of the various aspects of the problem: the pipeline design process; ocean physics, wave mechanics, hydrodynamic forces, and meteorological data determination; geotechnical data collection and soil mechanics; and stability design procedures. Volume two describes, lists, and illustrates the analysis software. Diskettes containing the software and examples of the software are also included in Volume two.

  4. Breastfeeding and Childhood IQ: The Mediating Role of Gray Matter Volume.

    PubMed

    Luby, Joan L; Belden, Andy C; Whalen, Diana; Harms, Michael P; Barch, Deanna M

    2016-05-01

    A substantial body of literature has established the positive effect of breastfeeding on child developmental outcomes. There is increasing consensus that breastfed children have higher IQs after accounting for key variables, including maternal education, IQ, and socioeconomic status. Cross-sectional investigations of the effects of breastfeeding on structural brain development suggest that breastfed infants have larger whole brain, cortical, and white matter volumes. To date, few studies have related these measures of brain structure to IQ in breastfed versus nonbreastfed children in a longitudinal sample. Data were derived from the Preschool Depression Study (PDS), a prospective longitudinal study in which children and caregivers were assessed annually for 8 waves over 11 years. A subset completed neuroimaging between the ages of 9.5 and 14.11 years. A total of 148 individuals had breastfeeding data at baseline and complete data on all variables of interest, including IQ and structural neuroimaging. General linear models and process mediation models were used. Breastfed children had significantly higher IQ scores and larger whole brain, total gray matter, total cortical gray matter, and subcortical gray matter volumes compared with the nonbreastfed group in models that covaried for key variables. Subcortical gray matter volume significantly mediated the association between breastfeeding and children's IQ scores. The study findings suggest that the effects of breastfeeding on child IQ are mediated through subcortical gray volume. This effect and putative mechanism is of public health significance and further supports the importance of breastfeeding in mental health promotion. Copyright © 2016 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  5. Breastfeeding and Childhood IQ: The Mediating Role of Gray Matter Volume

    PubMed Central

    Luby, Joan L.; Belden, Andy C.; Whalen, Diana; Harms, Michael P.; Barch, Deanna M.

    2016-01-01

    Objective A substantial body of literature has established the positive effect of breastfeeding on child developmental outcomes. There is increasing consensus that breastfed children have higher IQs after accounting for key variables, including maternal education, IQ, and socioeconomic status. Cross-sectional investigations of the effects of breastfeeding on structural brain development suggest that breastfed infants have larger whole brain, cortical, and white matter volumes. To date, few studies have related these measures of brain structure to IQ in breastfed versus nonbreastfed children in a longitudinal sample. Method Data were derived from the Preschool Depression Study (PDS), a prospective longitudinal study in which children and caregivers were assessed annually for 8 waves over 11 years. A subset completed neuroimaging between the ages of 9.5 and 14.11 years. A total of 148 individuals had breastfeeding data at baseline and complete data on all variables of interest, including IQ and structural neuroimaging. General linear models and process mediation models were used. Results Breastfed children had significantly higher IQ scores and larger whole brain, total gray matter, total cortical gray matter, and subcortical gray matter volumes compared with the nonbreastfed group in models that covaried for key variables. Subcortical gray matter volume significantly mediated the association between breast-feeding and children's IQ scores. Conclusion The study findings suggest that the effects of breastfeeding on child IQ are mediated through subcortical gray volume. This effect and putative mechanism is of public health significance and further supports the importance of breastfeeding in mental health promotion. PMID:27126850

  6. A two-phase debris-flow model that includes coupled evolution of volume fractions, granular dilatancy, and pore-fluid pressure

    USGS Publications Warehouse

    George, David L.; Iverson, Richard M.

    2011-01-01

    Pore-fluid pressure plays a crucial role in debris flows because it counteracts normal stresses at grain contacts and thereby reduces intergranular friction. Pore-pressure feedback accompanying debris deformation is particularly important during the onset of debris-flow motion, when it can dramatically influence the balance of forces governing downslope acceleration. We consider further effects of this feedback by formulating a new, depth-averaged mathematical model that simulates coupled evolution of granular dilatancy, solid and fluid volume fractions, pore-fluid pressure, and flow depth and velocity during all stages of debris-flow motion. To illustrate implications of the model, we use a finite-volume method to compute one-dimensional motion of a debris flow descending a rigid, uniformly inclined slope, and we compare model predictions with data obtained in large-scale experiments at the USGS debris-flow flume. Predictions for the first 1 s of motion show that increasing pore pressures (due to debris contraction) cause liquefaction that enhances flow acceleration. As acceleration continues, however, debris dilation causes dissipation of pore pressures, and this dissipation helps stabilize debris-flow motion. Our numerical predictions of this process match experimental data reasonably well, but predictions might be improved by accounting for the effects of grain-size segregation.

  7. Foundations for Streaming Model Transformations by Complex Event Processing.

    PubMed

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing allows to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.

  8. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    NASA Astrophysics Data System (ADS)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large scale dispersive processes which are embedded in a pore-scale advection diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scales. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainties giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called ''mobile-immobile'' conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high velocity region (mobile zone), while convective effects are neglected in a low velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two-equation ''mobile-mobile'' model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region ''mobile-mobile'' meso-scale models through a rigorous upscaling of the pore-scale advection diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell. Consistently with the two-region model working hypotheses, we subdivide the pore space into two volumes, which we select according to the features of the local micro-scale velocity field. Assuming separation of the scales, the mathematical development associated with the averaging method in the two volumes leads to a generalized two-equation model. The final (upscaled) formulation includes the standard first-order mass exchange term together with additional terms, which we discuss. Our developments allow us to identify the assumptions implicitly embedded in the usual adoption of a two-region mobile-mobile model. All macro-scale properties introduced in this model can be determined explicitly from the pore-scale geometry and hydrodynamics through the solution of a set of closure equations. We pursue here an unsteady closure of the problem, leading to the occurrence of nonlocal (in time) terms in the upscaled system of equations. We provide the solution of the closure problems for a simple application documenting the time dependent and the asymptotic behavior of the system.
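
    For reference, a compact statement of the classical two-equation mobile-mobile model that the above derivation generalizes is sketched below, with theta_i the volume fractions of the two regions, c_i the concentrations, D_i the dispersion tensors, q_i the regional fluxes, and alpha the first-order exchange coefficient; the upscaled formulation in the paper adds nonlocal-in-time terms to this closure.

    ```latex
    \begin{align}
    \theta_1 \frac{\partial c_1}{\partial t} &=
      \nabla \cdot \left( \theta_1 \mathbf{D}_1 \nabla c_1 \right)
      - \mathbf{q}_1 \cdot \nabla c_1 - \alpha \, ( c_1 - c_2 ), \\
    \theta_2 \frac{\partial c_2}{\partial t} &=
      \nabla \cdot \left( \theta_2 \mathbf{D}_2 \nabla c_2 \right)
      - \mathbf{q}_2 \cdot \nabla c_2 + \alpha \, ( c_1 - c_2 ).
    \end{align}
    ```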

  9. A model for calculating eruptive volumes for monogenetic volcanoes — Implication for the Quaternary Auckland Volcanic Field, New Zealand

    NASA Astrophysics Data System (ADS)

    Kereszturi, Gábor; Németh, Károly; Cronin, Shane J.; Agustín-Flores, Javier; Smith, Ian E. M.; Lindsay, Jan

    2013-10-01

    Monogenetic basaltic volcanism is characterised by a complex array of behaviours in the spatial distribution of magma output and also temporal variability in magma flux and eruptive frequency. Investigating this in detail is hindered by the difficulty in evaluating ages of volcanic events as well as volumes erupted in each volcano. Eruptive volumes are an important input parameter for volcanic hazard assessment and may control eruptive scenarios, especially transitions between explosive and effusive behaviour and the length of eruptions. Erosion, superposition and lack of exposure limit the accuracy of volume determination, even for very young volcanoes. In this study, a systematic volume estimation model is developed and applied to the Auckland Volcanic Field in New Zealand. In this model, a basaltic monogenetic volcano is categorised in six parts. Subsurface portions of volcanoes, such as diatremes beneath phreatomagmatic volcanoes, or crater infills, are approximated by geometrical considerations, based on exposed analogue volcanoes. Positive volcanic landforms, such as scoria/spatter cones, tephra rings and lava flows, were defined by using a Light Detection and Ranging (LiDAR) survey-based Digital Surface Model (DSM). Finally, the distal tephra associated with explosive eruptions was approximated using published relationships that relate original crater size to ejecta volumes. Considering only those parts with high reliability, the overall magma output (converted to Dense Rock Equivalent) for the post-250 ka active Auckland Volcanic Field in New Zealand is a minimum of 1.704 km3. This is made up of 1.329 km3 in lava flows, 0.067 km3 in phreatomagmatic crater lava infills, 0.090 km3 within tephra/tuff rings, 0.112 km3 inside crater lava infills, and 0.104 km3 within scoria cones. Using the minimum eruptive volumes, the spatial and temporal magma fluxes are estimated at 0.005 km3/km2 and 0.007 km3/ka. The temporal-volumetric evolution of Auckland is characterised by an increasing magma flux in the last 40 ky, which is inferred to be triggered by plate tectonic processes (e.g. increased asthenospheric shearing and backarc spreading underneath the Auckland region).

  10. Modelling orange tree root water uptake active area by minimally invasive ERT data and transpiration measurements

    NASA Astrophysics Data System (ADS)

    Vanella, Daniela; Boaga, Jacopo; Perri, Maria Teresa; Consoli, Simona; Cassiani, Giorgio

    2015-04-01

    The comprehension of the hydrological processes involving plant root dynamics is crucial for implementing water-saving measures in agriculture. This is particularly urgent in areas, like the Mediterranean, characterized by scarce water availability. The study of root water dynamics should not be separated from a more general analysis of the mass and energy fluxes transferred in the soil-plant-atmosphere continuum. In our study, in order to carry out this inclusive approach, minimally invasive 3D time-lapse electrical resistivity tomography (ERT) for soil moisture estimation was combined with plant transpiration fluxes directly measured with Sap Flow (SF) techniques and Eddy Covariance methods, and with volumetric soil moisture measurements by TDR probes. The main objective of this inclusive approach was to accurately define root-zone water dynamics and identify the root area effectively active in the water and nutrient uptake process. The monitoring was carried out in Eastern Sicily (southern Italy) in summers 2013 and 2014, within an experimental orange orchard farm. During the first year of the experiment (October 2013), ERT measurements were carried out around the pertinent volume of one fully irrigated tree, characterized by a vegetation ground cover of 70%; in the second year (June 2014), ERT monitoring was conducted on a cut plant, so as to evaluate soil water dynamics without the significant plant transpiration contribution. In order to explore the hydrological dynamics of the root-zone volume around the monitored tree, the resistivity data acquired during the ERT monitoring were converted into soil moisture content distributions by a laboratory calibration based on the soil electrical properties as a function of moisture content and pore water electrical conductivity. By using ERT data in conjunction with the agro-meteorological information of the test area (i.e. irrigation rates, rainfall, evapotranspiration by Eddy Covariance, transpiration by Sap Flow and soil moisture content by TDR), a spatially distributed one-dimensional (1D) model that solves the Richards' equation was applied; in the model, the van Genuchten parameters were obtained by laboratory analysis of soil water retention and soil permeability at saturation. Results of the 1D model were successfully compared with both ERT-based soil moisture dynamics and TDR measurements of soil moisture. The modelling allows the soil volume affected by the root water uptake process, and its extent, to be defined. In particular, this volume is significantly smaller (i.e. a surface area of 1.75 m2 with 0.4 m thickness) than expected considering the design of the drip irrigation scheme adopted in the farm. The obtained results confirm that (i) ERT is a technique that can provide a great deal of information on small-scale and vegetation-related processes; (ii) integration with physical modelling is essential to capture the meaning of space-time signal changes; and (iii) in the case of the orange orchard, this approach shows that about half of the irrigated water is wasted.

  11. Ac-conductivity and dielectric response of new zinc-phosphate glass/metal composites

    NASA Astrophysics Data System (ADS)

    Maaroufi, A.; Oabi, O.; Lucas, B.

    2016-07-01

    The ac-conductivity and dielectric response of new composites based on zinc-phosphate glass with composition 45 mol% ZnO-55 mol% P2O5, filled with metallic nickel powder (ZP/Ni), were investigated by impedance spectroscopy in the frequency range from 100 Hz to 1 MHz at room temperature. A percolation jump of about seven orders of magnitude was observed in the conductivity between low and high filler volume fractions, indicating an insulator-semiconductor phase transition. The measured conductivity at higher filler volume fractions is about 10^-1 S/cm and is frequency independent, while the conductivity obtained at low filler volume fractions is around 10^-8 S/cm and is frequency dependent. Moreover, the elaborated composites are characterized by high dielectric constants, in the range of 10^5 for conductive composites at low frequencies (100 Hz). In addition, the distribution of the relaxation processes was also evaluated. The Debye, Cole-Cole, Davidson-Cole and Havriliak-Negami models in the electric modulus formalism were used to model the observed relaxation phenomena in ZP/Ni composites. The observed relaxation phenomena are fairly well simulated by the Davidson-Cole model, and an account of the interpretation of the results is given.
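
    The relaxation models compared above form a nested family. A minimal Python sketch of the Havriliak-Negami permittivity follows; setting alpha = 1 recovers the Davidson-Cole model that the study found to fit best, beta = 1 gives Cole-Cole, and alpha = beta = 1 gives Debye. All parameter values below are hypothetical.

```python
import numpy as np

def havriliak_negami(omega, eps_inf, delta_eps, tau, alpha, beta):
    """Complex permittivity eps*(omega) = eps_inf + d_eps / (1 + (i w tau)^alpha)^beta.
    alpha = beta = 1 -> Debye; beta = 1 -> Cole-Cole; alpha = 1 -> Davidson-Cole."""
    return eps_inf + delta_eps / (1.0 + (1j * omega * tau) ** alpha) ** beta

# Illustrative parameters over the measured 100 Hz .. 1 MHz range.
f = np.logspace(2, 6, 300)
omega = 2 * np.pi * f
eps = havriliak_negami(omega, eps_inf=5.0, delta_eps=1e5, tau=1e-3, alpha=1.0, beta=0.6)
M = 1.0 / eps            # electric modulus formalism, as used in the paper
eps_real, eps_loss = eps.real, -eps.imag
```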

  12. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power-fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
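
    As an illustration of the power-fit idea, the hedged Python sketch below fits a log-log (power) relationship between initial and daily tumor volumes and evaluates it with leave-one-out cross-validation. The data are synthetic stand-ins; the study's actual functional general linear model is more involved.

```python
import numpy as np

# Hypothetical data: rows = tumors, cols = treatment days; v0 = pretreatment volumes.
rng = np.random.default_rng(0)
v0 = rng.uniform(5.0, 40.0, size=20)                             # cm^3
days = np.arange(1, 31)
daily = v0[:, None] * np.exp(-0.02 * days[None, :]) * rng.lognormal(0, 0.05, (20, 30))

def loo_power_fit(v0, daily, day_idx):
    """Leave-one-out prediction of one day's volume from a log-log (power) fit
    of daily volume against initial volume across the other tumors."""
    preds = np.empty_like(v0)
    for i in range(len(v0)):
        mask = np.arange(len(v0)) != i
        slope, intercept = np.polyfit(np.log(v0[mask]), np.log(daily[mask, day_idx]), 1)
        preds[i] = np.exp(intercept + slope * np.log(v0[i]))
    return preds

pred_day10 = loo_power_fit(v0, daily, day_idx=9)   # held-out predictions for day 10
```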

  13. A size-dependent constitutive model of bulk metallic glasses in the supercooled liquid region

    PubMed Central

    Yao, Di; Deng, Lei; Zhang, Mao; Wang, Xinyun; Tang, Na; Li, Jianjun

    2015-01-01

    Size effect is of great importance in micro-forming processes. In this paper, micro cylinder compression was conducted to investigate the deformation behavior of bulk metallic glasses (BMGs) in the supercooled liquid region under different deformation variables, including sample size, temperature and strain rate. It was found that the elastic and plastic behaviors of BMGs have a strong dependence on the sample size. The free volume and defect concentration were introduced to explain the size effect. In order to describe the influence of the deformation variables on the steady stress, elastic modulus and overshoot phenomenon, four size-dependent factors were proposed to construct a size-dependent constitutive model based on the Maxwell-pulse type model previously presented by the authors, according to viscosity theory and the free volume model. The proposed constitutive model was then adopted in finite element method simulations and validated by comparing micro cylinder compression and micro double-cup extrusion experimental data with the numerical results. Furthermore, the model provides a new approach to understanding the size-dependent plastic deformation behavior of BMGs. PMID:25626690
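
    The authors' size-dependent model builds on a Maxwell-type viscoelastic element. A plain single Maxwell element, not the full size-dependent formulation, can be integrated in a few lines of Python; the modulus, viscosity and strain rate below are illustrative only.

```python
import numpy as np

def maxwell_stress(strain_rate, E, eta, t_end=500.0, dt=0.01):
    """Explicit Euler integration of a single Maxwell element under constant
    strain rate: dsigma/dt = E*strain_rate - (E/eta)*sigma."""
    n = int(t_end / dt)
    sigma = np.zeros(n)
    for k in range(1, n):
        sigma[k] = sigma[k - 1] + dt * (E * strain_rate - (E / eta) * sigma[k - 1])
    return sigma

# Illustrative values: the stress relaxes toward the steady value eta * strain_rate
# with relaxation time eta / E (here 100 s); no overshoot without free-volume evolution.
sigma = maxwell_stress(strain_rate=1e-3, E=1e3, eta=1e5)
```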

  14. Redesigning a joint replacement program using Lean Six Sigma in a Veterans Affairs hospital.

    PubMed

    Gayed, Benjamin; Black, Stephen; Daggy, Joanne; Munshi, Imtiaz A

    2013-11-01

    In April 2009, an analysis of joint replacement surgical procedures at the Richard L. Roudebush Veterans Affairs Medical Center, Indianapolis, Indiana, revealed that total hip and knee replacements incurred $1.4 million in non-Veterans Affairs (VA) care costs with an average length of stay of 6.1 days during fiscal year 2008. The Joint Replacement Program system redesign project was initiated following the Vision-Analysis-Team-Aim-Map-Measure-Change-Sustain (VA-TAMMCS) model to increase efficiency, decrease length of stay, and reduce non-VA care costs. To determine the effectiveness of Lean Six Sigma process improvement methods applied in a VA hospital. Perioperative processes for patients undergoing total joint replacement were redesigned following the VA-TAMMCS model--the VA's official, branded method of Lean Six Sigma process improvement. A multidisciplinary team including the orthopedic surgeons, frontline staff, and executive management identified waste in the current processes and initiated changes to reduce waste and increase efficiency. Data collection included a 1-year baseline period and a 20-month sustainment period. The primary endpoint was length of stay; a secondary analysis considered non-VA care cost reductions. Length of stay decreased 36% overall, decreasing from 5.3 days during the preproject period to 3.4 days during the 20-month sustainment period (P < .001). Non-VA care was completely eliminated for patients undergoing total hip and knee replacement at the Richard L. Roudebush Veterans Affairs Medical Center, producing an estimated return on investment of $1 million annually when compared with baseline cost and volumes. In addition, the volume of total joint replacements at this center increased during the data collection period. The success of the Joint Replacement Program demonstrates that VA-TAMMCS is an effective tool for Lean and Six Sigma process improvement initiatives in a surgical practice, producing a 36% sustained reduction in length of stay and completely eliminating non-VA care for total hip and knee replacements while increasing total joint replacement volume at this medical center.

  15. Numerical analysis of the heating phase and densification mechanism in polymers selective laser melting process

    NASA Astrophysics Data System (ADS)

    Mokrane, Aoulaiche; Boutaous, M'hamed; Xin, Shihe

    2018-05-01

    The aim of this work is to model the SLS process at the scale of the part in a PA12 polymer powder bed. The powder bed is considered as a continuous medium with homogenized properties, while seeking to understand the multiple physical phenomena occurring during the process and to study the influence of process parameters on the quality of the final product. A thermal model based on an enthalpy approach is presented, with details on the multiphysical couplings that govern the thermal history (laser absorption, melting, coalescence, densification, volume shrinkage) and on the numerical implementation using the finite volume (FV) method. The simulations were carried out in 3D with an in-house FORTRAN code. After validation of the model against results from the literature, a parametric analysis is proposed. Original results, such as the densification process and the thermal history with the evolution of the material from the granular solid state to a homogeneous melted state, are discussed with regard to the physical phenomena involved.
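
    The enthalpy approach can be sketched compactly in one dimension. The Python fragment below is an assumption-laden stand-in for the authors' 3D FORTRAN code: it advances a volumetric enthalpy field with finite volume conduction and inverts enthalpy to temperature through an isothermal melting plateau. All material and process values are merely PA12-like guesses.

```python
import numpy as np

# Illustrative, roughly PA12-like properties (assumptions, not the paper's values).
rho, cp, k = 475.0, 2.0e3, 0.1       # bulk density, heat capacity, conductivity
L, Tm = 1.0e5, 451.0                 # latent heat [J/kg], melt temperature [K]
q_laser = 1e5                        # absorbed laser irradiance [W/m^2] (illustrative)
nx, dx, dt = 100, 1e-4, 1e-3         # grid cells, cell size [m], time step [s]

H = rho * cp * np.full(nx, 300.0)    # volumetric enthalpy [J/m^3], start at 300 K

def temperature(H):
    """Invert the enthalpy-temperature relation with an isothermal melting plateau."""
    Hs, Hl = rho * cp * Tm, rho * cp * Tm + rho * L
    T = np.where(H < Hs, H / (rho * cp), Tm)
    return np.where(H > Hl, Tm + (H - Hl) / (rho * cp), T)

for step in range(2000):
    T = temperature(H)
    flux = -k * np.diff(T) / dx      # Fourier conduction at the internal faces
    dH = np.zeros_like(H)
    dH[:-1] -= dt * flux / dx        # each cell loses through its right face
    dH[1:] += dt * flux / dx         # and gains through its left face
    dH[0] += dt * q_laser / dx       # laser heat input at the exposed surface
    H += dH
```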

  16. Prioritization of engineering support requests and advanced technology projects using decision support and industrial engineering models

    NASA Technical Reports Server (NTRS)

    Tavana, Madjid

    1995-01-01

    The evaluation and prioritization of Engineering Support Requests (ESR's) is a particularly difficult task at the Kennedy Space Center (KSC) Shuttle Project Engineering Office. This difficulty is due to the complexities inherent in the evaluation process and the lack of structured information. The evaluation process must consider a multitude of relevant pieces of information concerning Safety, Supportability, O&M Cost Savings, Process Enhancement, Reliability, and Implementation. Various analytical and normative models developed in the past have helped decision makers at KSC utilize large volumes of information in the evaluation of ESR's. The purpose of this project is to build on the existing methodologies and develop a multiple-criteria decision support system that captures the decision maker's beliefs through a series of sequential, rational, and analytical processes. The model utilizes the Analytic Hierarchy Process (AHP), subjective probabilities, the entropy concept, and the Maximize Agreement Heuristic (MAH) to enhance the decision maker's intuition in evaluating a set of ESR's.
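
    The AHP step reduces to extracting the principal eigenvector of a pairwise comparison matrix. The Python sketch below uses a hypothetical judgment matrix over four of the criteria named above; the matrix entries and the consistency-index check are illustrative, not KSC data.

```python
import numpy as np

# Hypothetical AHP pairwise comparison matrix (reciprocal, Saaty 1-9 scale).
criteria = ["Safety", "Supportability", "O&M Cost Savings", "Reliability"]
A = np.array([
    [1.0, 3.0, 5.0, 3.0],
    [1/3, 1.0, 3.0, 1.0],
    [1/5, 1/3, 1.0, 1/3],
    [1/3, 1.0, 3.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()                                             # normalized priority weights

ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)   # consistency index
for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print(f"consistency index: {ci:.3f}")                    # small -> judgments coherent
```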

  17. Bayesian prediction of future ice sheet volume using local approximation Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variability. For example, a warming climate corresponds to an increasing mean surface mass balance, but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model are prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low-dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.
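
    The local approximation MCMC used here is a specialized algorithm; as a hedged stand-in, the Python sketch below shows the plain random-walk Metropolis kernel on a toy two-parameter posterior, which is the baseline that such surrogate-accelerated samplers refine.

```python
import numpy as np

def metropolis(logpost, x0, steps=5000, scale=0.5, seed=0):
    """Random-walk Metropolis sampler (a plain-vanilla stand-in for the
    local approximation MCMC referenced in the abstract)."""
    rng = np.random.default_rng(seed)
    x, lp = np.array(x0, float), logpost(x0)
    chain = np.empty((steps, len(x0)))
    for k in range(steps):
        prop = x + scale * rng.standard_normal(len(x))
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        chain[k] = x
    return chain

# Toy Gaussian posterior standing in for two uncertain flowline parameters.
logpost = lambda p: -0.5 * ((p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2 / 0.25)
chain = metropolis(logpost, x0=[0.0, 0.0])
```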

  18. Achieving ICME with Multiscale Modeling: The Effects of Constituent Properties and Processing on the Performance of Laminated Polymer Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan Jorge; Bednarcyk, Brett A.; Arnold, Steven M.

    2014-01-01

    Integrated computational materials engineering (ICME) is a useful approach for tailoring the performance of a material. For fiber-reinforced composites, not only do the properties of the constituents of the composite affect the performance, but so does the architecture (or microstructure) of the constituents. The generalized method of cells is demonstrated to be a viable micromechanics tool for determining the effects of the microstructure on the performance of laminates. The micromechanics is used to predict the inputs for a macroscale model for a variety of different fiber volume fractions and fiber architectures. Using this technique, the material performance can be tailored for specific applications by judicious selection of constituents, volume fraction, and architectural arrangement given a particular manufacturing scenario.
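
    The generalized method of cells itself is too involved to reproduce here, but the role of fiber volume fraction can be illustrated with the far simpler Voigt/Reuss rule-of-mixtures bounds. The constituent moduli below are generic carbon/epoxy-like values, not from the paper.

```python
# Voigt/Reuss rule-of-mixtures bounds on the effective axial and transverse
# moduli of a unidirectional ply -- a much simpler stand-in for the
# generalized method of cells, just to show the effect of volume fraction.
def rule_of_mixtures(Ef, Em, vf):
    E1 = vf * Ef + (1 - vf) * Em             # Voigt bound (axial, iso-strain)
    E2 = 1.0 / (vf / Ef + (1 - vf) / Em)     # Reuss bound (transverse, iso-stress)
    return E1, E2

# Hypothetical carbon/epoxy constituent moduli [GPa].
for vf in (0.45, 0.55, 0.65):
    E1, E2 = rule_of_mixtures(Ef=230.0, Em=3.5, vf=vf)
    print(f"vf={vf:.2f}: E1={E1:.1f} GPa, E2={E2:.2f} GPa")
```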

  19. Direct methanol fuel cells: A database-driven design procedure

    NASA Astrophysics Data System (ADS)

    Flipsen, S. F. J.; Spitas, C.

    2011-10-01

    To test the feasibility of DMFC systems in the preliminary stages of the design process, the design engineer can make use of heuristic models identifying the opportunity for DMFC systems in a specific application. In general, these models are too generic and have low accuracy. To improve the accuracy, a second-order model is proposed in this paper. The second-order model consists of an evolutionary algorithm written in Mathematica, which selects a component set satisfying the fuel-cell system's performance requirements, places the components in 3D space and optimizes for volume. The results are presented as a 3D draft proposal together with a feasibility metric. To test the algorithm, the design of a DMFC system for an MP3 player is evaluated. The results show that volume and cost are an issue for the feasibility of a fuel-cell power system in the MP3 player. The generated designs and the algorithm are evaluated and recommendations are given.

  20. Environmental quenching of low-mass field galaxies

    NASA Astrophysics Data System (ADS)

    Fillingham, Sean P.; Cooper, Michael C.; Boylan-Kolchin, Michael; Bullock, James S.; Garrison-Kimmel, Shea; Wheeler, Coral

    2018-07-01

    In the local Universe, there is a strong division in the star-forming properties of low-mass galaxies, with star formation largely ubiquitous amongst the field population while satellite systems are predominantly quenched. This dichotomy implies that environmental processes play the dominant role in suppressing star formation within this low-mass regime (M⋆ ~ 10^(5.5-8) M⊙). As shown by observations of the Local Volume, however, there is a non-negligible population of passive systems in the field, which challenges our understanding of quenching at low masses. By applying the satellite quenching models of Fillingham et al. (2015) to subhalo populations in the Exploring the Local Volume In Simulations suite, we investigate the role of environmental processes in quenching star formation within the nearby field. Using model parameters that reproduce the satellite quenched fraction in the Local Group, we predict a quenched fraction - due solely to environmental effects - of ˜0.52 ± 0.26 within 1 < R/Rvir < 2 of the Milky Way and M31. This is in good agreement with current observations of the Local Volume and suggests that the majority of the passive field systems observed at these distances are quenched via environmental mechanisms. Beyond 2Rvir, however, dwarf galaxy quenching becomes difficult to explain through an interaction with either the Milky Way or M31, such that more isolated, field dwarfs may be self-quenched as a result of star-formation feedback.

  1. Environmental Quenching of Low-Mass Field Galaxies

    NASA Astrophysics Data System (ADS)

    Fillingham, Sean P.; Cooper, Michael C.; Boylan-Kolchin, Michael; Bullock, James S.; Garrison-Kimmel, Shea; Wheeler, Coral

    2018-04-01

    In the local Universe, there is a strong division in the star-forming properties of low-mass galaxies, with star formation largely ubiquitous amongst the field population while satellite systems are predominantly quenched. This dichotomy implies that environmental processes play the dominant role in suppressing star formation within this low-mass regime (M⋆ ~ 10^(5.5-8) M⊙). As shown by observations of the Local Volume, however, there is a non-negligible population of passive systems in the field, which challenges our understanding of quenching at low masses. By applying the satellite quenching models of Fillingham et al. (2015) to subhalo populations in the Exploring the Local Volume In Simulations (ELVIS) suite, we investigate the role of environmental processes in quenching star formation within the nearby field. Using model parameters that reproduce the satellite quenched fraction in the Local Group, we predict a quenched fraction - due solely to environmental effects - of ˜0.52 ± 0.26 within 1 < R/Rvir < 2 of the Milky Way and M31. This is in good agreement with current observations of the Local Volume and suggests that the majority of the passive field systems observed at these distances are quenched via environmental mechanisms. Beyond 2 Rvir, however, dwarf galaxy quenching becomes difficult to explain through an interaction with either the Milky Way or M31, such that more isolated, field dwarfs may be self-quenched as a result of star-formation feedback.

  2. Flat-plate solar array project. Volume 5: Process development

    NASA Technical Reports Server (NTRS)

    Gallagher, B.; Alexander, P.; Burger, D.

    1986-01-01

    The goal of the Process Development Area, as part of the Flat-Plate Solar Array (FSA) Project, was to develop and demonstrate solar cell fabrication and module assembly process technologies required to meet the cost, lifetime, production capacity, and performance goals of the FSA Project. R&D efforts expended by government, industry, and universities in developing processes capable of meeting the project's goals under volume production conditions are summarized. The cost goals allocated for processing were demonstrated with small-volume quantities that were extrapolated by cost analysis to large-volume production. To provide proper focus and coverage of the process development effort, four separate technology sections are discussed: surface preparation, junction formation, metallization, and module assembly.

  3. Analytical volcano deformation modelling: A new and fast generalized point-source approach with application to the 2015 Calbuco eruption

    NASA Astrophysics Data System (ADS)

    Nikkhoo, M.; Walter, T. R.; Lundgren, P.; Prats-Iraola, P.

    2015-12-01

    Ground deformation at active volcanoes is one of the key precursors of volcanic unrest, monitored by InSAR and GPS techniques at high spatial and temporal resolution, respectively. Modelling of the observed displacements establishes the link between them and the underlying subsurface processes and volume change. The so-called Mogi model and the rectangular dislocation are two commonly applied analytical solutions that allow quick interpretations based on the location, depth and volume change of pressurized spherical cavities and planar intrusions, respectively. Geological observations worldwide, however, suggest elongated, tabular or other non-equidimensional geometries for magma chambers. How can these be modelled? Generalized models, such as Davis's point ellipsoidal cavity or the rectangular dislocation solution, are geometrically limited and can barely improve the interpretation of the data. We develop a new artefact-free analytical solution for a rectangular dislocation, which also possesses full rotational degrees of freedom. We construct a kinematic model in terms of three pairwise-perpendicular rectangular dislocations with a prescribed opening only. This model represents a generalized point source in the far field, and also performs as a finite dislocation model for planar intrusions in the near field. We show that, by calculating Eshelby's shape tensor, the far-field displacements and stresses of any arbitrary triaxial ellipsoidal cavity can be reproduced by using this model. Regardless of its aspect ratios, the volume change of this model is simply the sum of the volume changes of the individual dislocations. Our model can be integrated in any inversion scheme as simply as the Mogi model, profiting at the same time from the advantages of a generalized point source. After evaluating our model by using a boundary element method code, we apply it to ground displacements of the 2015 Calbuco eruption, Chile, observed by the Sentinel-1 satellite. We infer the parameters of a deflating elongated source located beneath Calbuco, and find significant differences to Mogi-type solutions. The results imply that interpretations based on our model may help us better understand source characteristics, and in the case of Calbuco volcano they suggest a volcano-tectonic coupling mechanism.
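
    For comparison with the generalized source discussed above, the classical Mogi point-source surface displacements take a closed form. The Python sketch below implements the standard textbook expressions; the depth and volume change are illustrative, not the inverted Calbuco parameters.

```python
import numpy as np

def mogi_displacement(x, y, d, dV, nu=0.25):
    """Surface displacements (ur, uz) of a Mogi point source at depth d with
    volume change dV in an elastic half-space:
    uz = (1 - nu)/pi * dV * d / (r^2 + d^2)^(3/2), ur analogous with r."""
    r = np.hypot(x, y)
    R3 = (r ** 2 + d ** 2) ** 1.5
    c = (1.0 - nu) / np.pi * dV
    return c * r / R3, c * d / R3          # radial, vertical displacement [m]

# Illustrative deflation of 0.01 km^3 at 8 km depth along a surface profile.
x = np.linspace(-30e3, 30e3, 601)          # distance from source axis [m]
ur, uz = mogi_displacement(x, 0.0, d=8e3, dV=-0.01e9)
```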

  4. Accuracy evaluation of Fourier series analysis and singular spectrum analysis for predicting the volume of motorcycle sales in Indonesia

    NASA Astrophysics Data System (ADS)

    Sasmita, Yoga; Darmawan, Gumgum

    2017-08-01

    This research aims to evaluate the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more explorative and do not require parametric assumptions. The methods are applied to predicting the volume of motorcycle sales in Indonesia from January 2005 to December 2016 (monthly data). Both models are suitable for data with seasonal and trend components. Technically, FSA describes the time domain as the result of trend and seasonal components at different frequencies, which are difficult to identify in time-domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. Meanwhile, SSA has two main processes, decomposition and reconstruction. SSA decomposes the time series data into different components. The reconstruction process starts with grouping the decomposition results based on the similar periods of each component in the trajectory matrix. With the optimum window length (L = 53) and grouping effect (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated based on Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that, for the next 12 months, SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, while FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, predicting the volume of motorcycle sales in the next period should use the SSA method, which shows better performance in terms of accuracy.
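
    The three accuracy measures are straightforward to compute. A minimal Python sketch follows; the test series and forecasts are fabricated placeholders, while the formulas match the standard MAPE, MAE and RMSE definitions used above.

```python
import numpy as np

def mape(y, yhat):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def mae(y, yhat):
    """Mean Absolute Error."""
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    """Root Mean Square Error."""
    return np.sqrt(np.mean((y - yhat) ** 2))

# Hypothetical 12-month test series and forecasts, for illustration only.
y = np.array([510e3, 495e3, 530e3, 488e3, 505e3, 520e3,
              498e3, 515e3, 507e3, 492e3, 525e3, 511e3])
yhat = y * (1 + np.random.default_rng(1).normal(0, 0.1, 12))
print(mape(y, yhat), mae(y, yhat), rmse(y, yhat))
```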

  5. Generation of field potentials and modulation of their dynamics through volume integration of cortical activity.

    PubMed

    Kajikawa, Yoshinao; Schroeder, Charles E

    2015-01-01

    Field potentials (FPs) recorded within the brain, often called "local field potentials" (LFPs), are useful measures of net synaptic activity in a neuronal ensemble. However, due to volume conduction, FPs spread beyond regions of underlying synaptic activity, and thus an "LFP" signal may not accurately reflect the temporal patterns of synaptic activity in the immediately surrounding neuron population. To better understand the physiological processes reflected in FPs, we explored the relationship between the FP and its membrane current generators using current source density (CSD) analysis in conjunction with a volume conductor model. The model provides a quantitative description of the spatiotemporal summation of immediate local and more distant membrane currents to produce the FP. By applying the model to FPs in the macaque auditory cortex, we have investigated a critical issue that has broad implications for FP research. We have shown that FP responses in particular cortical layers are differentially susceptible to activity in other layers. Activity in the supragranular layers has the strongest contribution to FPs in other cortical layers, and infragranular FPs are most susceptible to contributions from other layers. To define the physiological processes generating FPs recorded in loci of relatively weak synaptic activity, strong effects produced by synaptic events in the vicinity have to be taken into account. While outlining limitations and caveats inherent to FP measurements, our results also suggest specific peak and frequency band components of FPs can be related to activity in specific cortical layers. These results may help improve the interpretability of FPs. Copyright © 2015 the American Physiological Society.

  6. Generation of field potentials and modulation of their dynamics through volume integration of cortical activity

    PubMed Central

    Schroeder, Charles E.

    2014-01-01

    Field potentials (FPs) recorded within the brain, often called “local field potentials” (LFPs), are useful measures of net synaptic activity in a neuronal ensemble. However, due to volume conduction, FPs spread beyond regions of underlying synaptic activity, and thus an “LFP” signal may not accurately reflect the temporal patterns of synaptic activity in the immediately surrounding neuron population. To better understand the physiological processes reflected in FPs, we explored the relationship between the FP and its membrane current generators using current source density (CSD) analysis in conjunction with a volume conductor model. The model provides a quantitative description of the spatiotemporal summation of immediate local and more distant membrane currents to produce the FP. By applying the model to FPs in the macaque auditory cortex, we have investigated a critical issue that has broad implications for FP research. We have shown that FP responses in particular cortical layers are differentially susceptible to activity in other layers. Activity in the supragranular layers has the strongest contribution to FPs in other cortical layers, and infragranular FPs are most susceptible to contributions from other layers. To define the physiological processes generating FPs recorded in loci of relatively weak synaptic activity, strong effects produced by synaptic events in the vicinity have to be taken into account. While outlining limitations and caveats inherent to FP measurements, our results also suggest specific peak and frequency band components of FPs can be related to activity in specific cortical layers. These results may help improve the interpretability of FPs. PMID:25274348

  7. Long-range laser scanning and 3D imaging for the Gneiss quarries survey

    NASA Astrophysics Data System (ADS)

    Schenker, Filippo Luca; Spataro, Alessio; Pozzoni, Maurizio; Ambrosi, Christian; Cannata, Massimiliano; Günther, Felix; Corboud, Federico

    2016-04-01

    In Canton Ticino (Southern Switzerland), the exploitation of natural stone, mostly gneisses, is an important activity in the valleys' economies. Nowadays, these economic activities are threatened by (i) exploitation costs related to geological phenomena such as fractures, faults and heterogeneous rocks that hinder the processing of the stone product, (ii) continuously changing demand driven by evolving natural stone fashion and (iii) increasing administrative limits and rules intended to protect the environment. Therefore, the sustainable development of the sector over the next decades needs new and effective strategies to regulate and plan the quarries. A fundamental step in this process is the building of a 3D geological model of the quarries to constrain the volume of commercial natural stone and the volume of waste. In this context, we conducted Terrestrial Laser Scanning surveys of the quarries in the Maggia Valley to obtain a detailed 3D topography onto which the geological units were mapped. The topographic 3D model was obtained with a long-range laser scanner (Riegl VZ4000) that can measure from up to 4 km away at a speed of 147,000 points per second. It operates with the new V-line technology, which defines the surface relief by sensing differentiated signals (echoes), even in the presence of obstacles such as vegetation. Depending on the aesthetics of the gneisses, we defined seven types of natural stone that, together with faults and joints, were mapped onto the 3D models of the exploitation sites. According to the orientation of the geological limits and structures, we projected the different rock units and fractures onto the excavation front. This way, we obtained a 3D geological model from which we can quantitatively estimate the volumes of the seven different natural stones (with different commercial values) and of the waste (with low commercial value). To verify the 3D geological models and to quantify exploited rock and waste volumes, the same procedure will be repeated after ca. 6 months. Finally, these 3D geological models can be useful to (i) decrease exploitation costs, because they yield the extraction potential of a quarry, (ii) make exploitation more efficient and market response more dynamic, because they permit better planning, and (iii) decrease waste by limiting excavation in regions with low-quality rocks.

  8. Using a Gaussian Process Emulator for Data-driven Surrogate Modelling of a Complex Urban Drainage Simulator

    NASA Astrophysics Data System (ADS)

    Bellos, V.; Mahmoodian, M.; Leopold, U.; Torres-Matallana, J. A.; Schutz, G.; Clemens, F.

    2017-12-01

    Surrogate models help to decrease the run-time of computationally expensive, detailed models. Recent studies show that Gaussian Process Emulators (GPE) are promising techniques in the field of urban drainage modelling. This study focusses on developing a GPE-based surrogate model for later application in Real Time Control (RTC), using input and output time series of a complex simulator. The case study is an urban drainage catchment in Luxembourg. A detailed simulator, implemented in InfoWorks ICM, is used to generate 120 input-output ensembles, of which 100 are used for training the emulator and 20 for validation of the results. An ensemble of historical rainfall events with 2-hour duration and 10-minute time steps is considered as the input data. Two example outputs are selected: wastewater volume and total COD concentration in a storage tank in the network. The results of the emulator are tested with unseen random rainfall events from the ensemble dataset. The emulator is approximately 1000 times faster than the original simulator for this small case study. Whereas the overall patterns of the simulator are matched by the emulator, in some cases the emulator deviates from the simulator. To quantify the accuracy of the emulator in comparison with the original simulator, the Nash-Sutcliffe efficiency (NSE) between emulator and simulator is calculated for unseen rainfall scenarios. The range of NSE for tank volume is from 0.88 to 0.99 with a mean value of 0.95, whereas for COD it is from 0.71 to 0.99 with a mean value of 0.92. The emulator is able to predict the tank volume with higher accuracy because the relationship between rainfall intensity and tank volume is linear. For COD, which has non-linear behaviour, the predictions are less accurate and more uncertain, in particular when rainfall intensity increases. These predictions were improved by including a larger amount of training data for the higher rainfall intensities. It was observed that the accuracy of the emulator predictions depends on the design of the ensemble training dataset and the amount of data fed. Finally, more investigation is required to test the possibility of applying this type of fast emulator for model-based RTC applications, in which a limited number of inputs and outputs are considered over a short prediction horizon.
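
    A minimal GPE of this kind can be assembled with scikit-learn's Gaussian process regressor. The sketch below is an assumption-heavy reduction of the set-up described above: two summary rainfall features stand in for the full time series, the "volume" response is synthetic, and the 80/20 split only loosely mirrors the study's 100/20 design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: two rainfall summary features -> tank volume.
rng = np.random.default_rng(0)
X = rng.uniform(0, 30, (100, 2))          # e.g. total depth, peak intensity
y = 50 * X[:, 0] + 5 * X[:, 1] ** 1.5 + rng.normal(0, 20, 100)  # toy "volume"

gpe = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                               normalize_y=True)
gpe.fit(X[:80], y[:80])                                  # train on 80 events
mean, std = gpe.predict(X[80:], return_std=True)         # predict 20 held out

# Nash-Sutcliffe efficiency of the emulator against the held-out "simulator".
nse = 1 - np.sum((y[80:] - mean) ** 2) / np.sum((y[80:] - y[80:].mean()) ** 2)
print(f"NSE = {nse:.3f}")
```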

  9. An analysis of electrical conductivity model in saturated porous media

    NASA Astrophysics Data System (ADS)

    Cai, J.; Wei, W.; Qin, X.; Hu, X.

    2017-12-01

    Electrical conductivity of saturated porous media has numerous applications in many fields. In recent years, the number of theoretical methods to model the electrical conductivity of complex porous media has dramatically increased. Nevertheless, modelling the spatial conductivity distribution function continues to present challenges when these models are used in reservoirs, particularly in porous media with strongly heterogeneous pore-space distributions. Many experiments show a more complex distribution of electrical conductivity data than the predictions derived from empirical models. Studies have observed anomalously high electrical conductivity in some low-porosity (tight) formations compared to more porous reservoir rocks, which indicates that current flow in porous media is complex and difficult to predict. Moreover, the change in electrical conductivity depends not only on the pore volume fraction but also on several geometric properties of the wider pore network, including pore interconnection and tortuosity. To improve our understanding of electrical conductivity models in porous media, we study the applicability of several well-known methods/theories to the electrical characteristics of porous rocks as a function of pore volume, tortuosity and interconnection, to estimate electrical conductivity based on the micro-geometrical properties of rocks. We analyze the state of the art of scientific knowledge and practice for modeling porous structural systems, with the purpose of identifying current limitations and defining a blueprint for future modeling advances. We compare conceptual descriptions of electrical current flow processes in pore space considering several distinct modeling approaches. Approaches to obtaining more reasonable electrical conductivity models are discussed. Experiments suggest more complex relationships between electrical conductivity and porosity than empirical models predict, particularly in low-porosity formations. However, the available theoretical models combined with simulations do provide insight into how microscale physics affects macroscale electrical conductivity in porous media.
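
    The canonical empirical baseline in this area is Archie's law. The short Python sketch below evaluates it over a porosity range that includes tight formations; as the abstract argues, such a porosity-only model cannot capture tortuosity and connectivity effects, which is exactly its limitation.

```python
import numpy as np

def archie_conductivity(sigma_w, phi, m=2.0, a=1.0, Sw=1.0, n=2.0):
    """Archie's law: sigma = sigma_w * phi**m * Sw**n / a, with cementation
    exponent m, tortuosity factor a and saturation exponent n. A purely
    empirical baseline; it ignores the pore-network connectivity effects
    the abstract argues are needed in heterogeneous rocks."""
    return sigma_w * phi ** m * Sw ** n / a

# Porosity sweep from tight formations to porous reservoir rock,
# with a brine conductivity of 1 S/m (illustrative values).
phi = np.linspace(0.02, 0.30, 50)
sigma = archie_conductivity(sigma_w=1.0, phi=phi)
```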

  10. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939

  11. Dynamic interactions in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbib, M.A.; Amari, S.

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  12. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converters were derived. Derivations of state-space equations and transfer functions are presented. Computer listings for the converters are presented, and the input parameters are justified.

  13. Insights about transport mechanisms and fracture flow channeling from multi-scale observations of tracer dispersion in shallow fractured crystalline rock.

    PubMed

    Guihéneuf, N; Bour, O; Boisson, A; Le Borgne, T; Becker, M W; Nigon, B; Wajiduddin, M; Ahmed, S; Maréchal, J-C

    2017-11-01

    In fractured media, solute transport is controlled by advection in open and connected fractures and by matrix diffusion, which may be enhanced by chemical weathering of the fracture walls. These phenomena may lead to non-Fickian dispersion characterized by early tracer arrival times, late-time tailing of the breakthrough curves and a potential scale effect on transport processes. Here we investigate the scale dependency of these processes by analyzing a series of convergent and push-pull tracer experiments with distances of investigation ranging from 4 m to 41 m in shallow fractured granite. The small- and intermediate-distance convergent experiments display a non-Fickian tailing, characterized by a -2 power-law slope. However, the largest-distance experiment does not display a clear power-law behavior and indicates possibly two main pathways. The push-pull experiments show that breakthrough-curve tailing decreases as the volume of investigation increases, with a power-law slope ranging from -3 to -2.3 from the smallest to the largest volume. The multipath model developed by Becker and Shapiro (2003) is used here to evaluate the hypothesis of the independence of flow pathways. The multipath model is found to explain the convergent data when local dispersivity is increased and the number of pathways is reduced with distance, which suggests a transition from non-Fickian to Fickian transport at the fracture scale. However, this model predicts an increase of tailing with push-pull distance, while the experiments show the opposite trend. This inconsistency may suggest the activation of cross-channel mass transfer at larger volumes of investigation, which leads to non-reversible heterogeneous advection with scale. This transition from independent channels to connected channels as the volume of investigation increases suggests that both convergent and push-pull breakthrough curves can inform on the existence of characteristic length scales. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Insights about transport mechanisms and fracture flow channeling from multi-scale observations of tracer dispersion in shallow fractured crystalline rock

    NASA Astrophysics Data System (ADS)

    Guihéneuf, N.; Bour, O.; Boisson, A.; Le Borgne, T.; Becker, M. W.; Nigon, B.; Wajiduddin, M.; Ahmed, S.; Maréchal, J.-C.

    2017-11-01

    In fractured media, solute transport is controlled by advection in open and connected fractures and by matrix diffusion, which may be enhanced by chemical weathering of the fracture walls. These phenomena may lead to non-Fickian dispersion characterized by early tracer arrival times, late-time tailing of the breakthrough curves and a potential scale effect on transport processes. Here we investigate the scale dependency of these processes by analyzing a series of convergent and push-pull tracer experiments with distances of investigation ranging from 4 m to 41 m in shallow fractured granite. The small- and intermediate-distance convergent experiments display a non-Fickian tailing, characterized by a -2 power-law slope. However, the largest-distance experiment does not display a clear power-law behavior and indicates possibly two main pathways. The push-pull experiments show that breakthrough-curve tailing decreases as the volume of investigation increases, with a power-law slope ranging from -3 to -2.3 from the smallest to the largest volume. The multipath model developed by Becker and Shapiro (2003) is used here to evaluate the hypothesis of the independence of flow pathways. The multipath model is found to explain the convergent data when local dispersivity is increased and the number of pathways is reduced with distance, which suggests a transition from non-Fickian to Fickian transport at the fracture scale. However, this model predicts an increase of tailing with push-pull distance, while the experiments show the opposite trend. This inconsistency may suggest the activation of cross-channel mass transfer at larger volumes of investigation, which leads to non-reversible heterogeneous advection with scale. This transition from independent channels to connected channels as the volume of investigation increases suggests that both convergent and push-pull breakthrough curves can inform on the existence of characteristic length scales.

  15. Modeling Hurricane Katrina's merchantable timber and wood damage in south Mississippi using remotely sensed and field-measured data

    NASA Astrophysics Data System (ADS)

    Collins, Curtis Andrew

    Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic-foot volume (outside bark) and green-weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest-type model strata. These 36 models were then fit and reported across 10 variable sets and variable-set combinations for volume and ton units. Along with the large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field-measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data. As part of the modeling process, single variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model utility remains flexible, as not all independent-variable data are needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent-variable selection techniques. After variable selection, weighted least squares techniques were often employed, using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving the homogeneity of the residual variance. Finished model fits, as represented by the coefficient of determination (R2), surpassed 0.5 in numerous models, with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events recur.
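
    The weighting scheme described above maps directly onto a weighted least squares fit. The Python sketch below uses statsmodels with fabricated stand-in predictors (a pre-storm volume, an imagery-derived index, and a wind speed); only the one-over-square-root-of-pre-storm-volume weighting is taken from the study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-ins for the study's predictors (field, Landsat, HWIND).
rng = np.random.default_rng(42)
n = 200
pre_volume = rng.uniform(500, 5000, n)      # pre-storm volume (illustrative units)
ndvi_change = rng.normal(-0.1, 0.05, n)     # imagery-derived predictor
wind = rng.uniform(20, 60, n)               # HWIND-style wind speed [m/s]
damage = 0.4 * pre_volume * (wind / 60) + rng.normal(0, 100, n)  # toy response

X = sm.add_constant(np.column_stack([pre_volume, ndvi_change, wind]))
weights = 1.0 / np.sqrt(pre_volume)         # the weighting the study reports
fit = sm.WLS(damage, X, weights=weights).fit()
print(fit.rsquared)                          # analogous to the reported R2 values
```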

  16. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.

  17. Pairing top-down and bottom-up approaches to analyze catchment scale management of water quality and quantity

    NASA Astrophysics Data System (ADS)

    Lovette, J. P.; Duncan, J. M.; Band, L. E.

    2016-12-01

    Watershed management requires information on the hydrologic impacts of local to regional land use, land cover and infrastructure conditions. Management of runoff volumes, storm flows, and water quality can benefit from large scale, "top-down" screening tools, using readily available information, as well as more detailed, "bottom-up" process-based models that explicitly track local runoff production and routing from sources to receiving water bodies. Regional scale data, available nationwide through the NHD+, and top-down models based on aggregated catchment information provide useful tools for estimating regional patterns of peak flows, volumes and nutrient loads at the catchment level. Management impacts can be estimated with these models, but have limited ability to resolve impacts beyond simple changes to land cover proportions. Alternatively, distributed process-based models provide more flexibility in modeling management impacts by resolving spatial patterns of nutrient source, runoff generation, and uptake. This bottom-up approach can incorporate explicit patterns of land cover, drainage connectivity, and vegetation extent, but are typically applied over smaller areas. Here, we first model peak flood flows and nitrogen loads across North Carolina's 70,000 NHD+ catchments using USGS regional streamflow regression equations and the SPARROW model. We also estimate management impact by altering aggregated sources in each of these models. To address the missing spatial implications of the top-down approach, we further explore the demand for riparian buffers as a management strategy, simulating the accumulation of nutrient sources along flow paths and the potential mitigation of these sources through forested buffers. We use the Regional Hydro-Ecological Simulation System (RHESSys) to model changes across several basins in North Carolina's Piedmont and Blue Ridge regions, ranging in size from 15 - 1,130 km2. The two approaches provide a complementary set of tools for large area screening, followed by smaller, more process based assessment and design tools.

  18. High-resolution Episcopic Microscopy (HREM) - Simple and Robust Protocols for Processing and Visualizing Organic Materials

    PubMed Central

    Geyer, Stefan H.; Maurer-Gesek, Barbara; Reissig, Lukas F.; Weninger, Wolfgang J.

    2017-01-01

    We provide simple protocols for generating digital volume data with the high-resolution episcopic microscopy (HREM) method. HREM is capable of imaging organic materials with volumes up to 5 x 5 x 7 mm3 in typical numeric resolutions between 1 x 1 x 1 and 5 x 5 x 5 µm3. Specimens are embedded in methacrylate resin and sectioned on a microtome. After each section, an image of the block surface is captured with a digital video camera that sits on the phototube connected to the compound microscope head. The optical axis passes through a green fluorescent protein (GFP) filter cube and is aligned with a position at which the block holder arm comes to rest after each section. In this way, a series of inherently aligned digital images displaying subsequent block surfaces is produced. Loading such an image series in three-dimensional (3D) visualization software facilitates the immediate conversion to digital volume data, which permit virtual sectioning in various orthogonal and oblique planes and the creation of volume- and surface-rendered computer models. We present three simple, tissue-specific protocols for processing various groups of organic specimens, including mouse, chick, quail, frog and zebrafish embryos, human biopsy material, uncoated paper and skin replacement material. PMID:28715372

  19. High-resolution Episcopic Microscopy (HREM) - Simple and Robust Protocols for Processing and Visualizing Organic Materials.

    PubMed

    Geyer, Stefan H; Maurer-Gesek, Barbara; Reissig, Lukas F; Weninger, Wolfgang J

    2017-07-07

    We provide simple protocols for generating digital volume data with the high-resolution episcopic microscopy (HREM) method. HREM is capable of imaging organic materials with volumes up to 5 x 5 x 7 mm3 in typical numeric resolutions between 1 x 1 x 1 and 5 x 5 x 5 µm3. Specimens are embedded in methacrylate resin and sectioned on a microtome. After each section, an image of the block surface is captured with a digital video camera that sits on the phototube connected to the compound microscope head. The optical axis passes through a green fluorescent protein (GFP) filter cube and is aligned with a position at which the block holder arm comes to rest after each section. In this way, a series of inherently aligned digital images displaying subsequent block surfaces is produced. Loading such an image series in three-dimensional (3D) visualization software facilitates the immediate conversion to digital volume data, which permit virtual sectioning in various orthogonal and oblique planes and the creation of volume- and surface-rendered computer models. We present three simple, tissue-specific protocols for processing various groups of organic specimens, including mouse, chick, quail, frog and zebrafish embryos, human biopsy material, uncoated paper and skin replacement material.

  20. Application of the Cluster Expansion to a Mathematical Model of the Long Memory Phenomenon in a Financial Market

    NASA Astrophysics Data System (ADS)

    Kuroda, Koji; Maskawa, Jun-ichi; Murai, Joshin

    2013-08-01

    Empirical studies of high-frequency data in stock markets show that the time series of trade signs or signed volumes has a long memory property. In this paper, we present a discrete-time stochastic process for a polymer model which describes a trader's trading strategy, and show that a scaling limit of the process converges to a superposition of fractional Brownian motions with Hurst exponents and Brownian motion, provided that the index γ of the time scale of the trader's investment strategy coincides with the index δ of the interaction range in the discrete-time process. The main tool for the investigation is the method of cluster expansion developed in the mathematical study of statistical mechanics.
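
    Fractional Brownian motion with Hurst exponent H > 1/2 is the standard mathematical carrier of such long memory. The Python sketch below draws a sample path by Cholesky factorization of the fBm covariance; it illustrates the limit object only, not the authors' polymer-model construction.

```python
import numpy as np

def fbm_cholesky(n, hurst, T=1.0, seed=0):
    """Sample a fractional Brownian motion path on (0, T] by Cholesky
    factorization of its covariance
    Cov(B_H(t), B_H(s)) = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H});
    O(n^3), so it is suitable only for modest n."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (u ** (2 * hurst) + s ** (2 * hurst) - np.abs(u - s) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for numerical safety
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z

t, path = fbm_cholesky(n=500, hurst=0.7)   # H > 1/2 -> long-memory increments
```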

  1. Managing Teaching and Learning in Further and Higher Education.

    ERIC Educational Resources Information Center

    Ashcroft, Kate; Foreman-Peck, Lorraine

    This book addresses the practical management of higher education teaching and learning as it is related to any subject. The volume stresses principles of the reflective practitioner model--open-mindedness, responsibility, and wholeheartedness. Chapter 1 introduces the book and its concept of teaching as a management process. Chapter 2 focuses on…

  2. Operational Test and Evaluation Handbook for Aircrew Training Devices. Volume I. Planning and Management.

    DTIC Science & Technology

    1982-02-01

    ...and to develop an awareness of the T&E roles and responsibilities of the various Air Force organizations involved in the T&E process... mathematical models to determine controller messages and issue controller messages using computer-generated speech. AUTOMATED PERFORMANCE ALERTS: Signals

  3. Bridging the Gap between Theory and Practice in Educational Research: Methods at the Margins

    ERIC Educational Resources Information Center

    Winkle-Wagner, Rachelle, Ed.; Hunter, Cheryl A., Ed.; Ortloff, Debora Hinderliter, Ed.

    2009-01-01

    This book provides new ways of thinking about educational processes, using quantitative and qualitative methodologies. Concrete examples of research techniques are provided for those conducting research with marginalized populations or about marginalized ideas. This volume asserts theoretical models related to research methods and the study of…

  4. The Future of Humanities Labor

    ERIC Educational Resources Information Center

    Bauerlein, Mark

    2008-01-01

    "Publish or perish" has long been the formula of academic labor at research universities, but for many humanities professors that imperative has decayed into a simple rule of production. The publish-or-perish model assumed a peer-review process that maintained quality, but more and more it is the bare volume of printed words that counts. When…

  5. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. Volume 1; Overviews (subsystem 0)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Cess, Robert D.; Charlock, Thomas P.; Coakley, James A.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 1 provides both summarized and detailed overviews of the CERES Release 1 data analysis system. CERES will produce global shortwave and longwave radiative fluxes at the top of the atmosphere, at the surface, and within the atmosphere by using a combination of a large variety of measurements and models. The CERES processing system includes radiance observations from CERES scanning radiometers, cloud properties derived from coincident satellite imaging radiometers, temperature and humidity fields from meteorological analysis models, and high-temporal-resolution geostationary satellite radiances to account for unobserved times. CERES will provide a continuation of the ERBE record and the lowest-error climatology of consistent cloud properties and radiation fields. CERES will also substantially improve our knowledge of the Earth's surface radiation budget.

  6. Replacement of tritiated water from irradiated fuel storage bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castillo, I.; Boniface, H.; Suppiah, S.

    2015-03-15

    Recently, AECL developed a novel method to reduce tritium emissions (to groundwater) and personnel doses at the NRU (National Research Universal) reactor irradiated fuel storage bay (also known as rod or spent fuel bay) through a water swap process. The light water in the fuel bay had built up tritium that had been transferred from the heavy water moderator through normal fuel transfers. The major advantage of the thermal stratification method was that a very effective tritium reduction could be achieved by swapping a minimal volume of bay water and warm tritiated water would be skimmed off the bay surface. A demonstration of the method was done that involved Computational Fluid Dynamics (CFD) modeling of the swap process and a test program that showed excellent agreement with model prediction for the effective removal of almost all the tritium with a minimal water volume. Building on the successful demonstration, AECL fabricated, installed, commissioned and operated a full-scale system to perform a water swap. This full-scale water swap operation achieved a tritium removal efficiency of about 96%.

  7. Mechanisms of the 40-70 Day Variability in the Yucatan Channel Volume Transport

    NASA Astrophysics Data System (ADS)

    van Westen, René M.; Dijkstra, Henk A.; Klees, Roland; Riva, Riccardo E. M.; Slobbe, D. Cornelis; van der Boog, Carine G.; Katsman, Caroline A.; Candy, Adam S.; Pietrzak, Julie D.; Zijlema, Marcel; James, Rebecca K.; Bouma, Tjeerd J.

    2018-02-01

    The Yucatan Channel connects the Caribbean Sea with the Gulf of Mexico and is the main outflow region of the Caribbean Sea. Moorings in the Yucatan Channel show high-frequency variability in kinetic energy (50-100 days) and transport (20-40 days), but the physical mechanisms controlling this variability are poorly understood. In this study, we show that the short-term variability in the Yucatan Channel transport has an upstream origin and arises from processes in the North Brazil Current. To establish this connection, we use data from altimetry and model output from several high resolution global models. A significant 40-70 day variability is found in the sea surface height in the North Brazil Current retroflection region with a propagation toward the Lesser Antilles. The frequency of variability is generated by intrinsic processes associated with the shedding of eddies, rather than by atmospheric forcing. This sea surface height variability is able to pass the Lesser Antilles, it propagates westward with the background ocean flow in the Caribbean Sea and finally affects the variability in the Yucatan Channel volume transport.

  8. United States Air Force Summer Research Program 1991. Summer Faculty Research Program (SFRP) Reports. Volume 5B. Wright Laboratory

    DTIC Science & Technology

    1992-01-09

    interfaces of intermetallic-matrix composites (for example, with Ti-11 wt.% Al-14 wt.% Nb and other titanium aluminides combined with various fibers... titanium aluminide intermetallics should be processed, tested and characterized by TEM. These intermetallic-matrix composites (IMC) are important for...these titanium aluminides have a greater CTE mismatch and have been modelled to undergo significant plastic deformation as a result of thermal processing

  9. Conference Proceedings: Aptitude, Learning, and Instruction. Volume 2. Cognitive Process Analyses of Learning and Problem Solving,

    DTIC Science & Technology

    1981-01-01

    of a Complex System; 177 Albert L. Stevens and Allan Collins Introduction 177 Models 182 Conclusion 196 21. Complex Learning Processes 199 John R...have called schemata (Norman, Gentner, & Stevens, 1976), frames (Minsky, 1975), and scripts (Schank & Abelson, 1977). If these other authors are...York: McGraw-Hill, 1975. Norman, D. A., Gentner, D. R., & Stevens, A. L. Comments on learning schemata and memory representation. In D. Klahr (Ed

  10. USAF Logistics Process Optimization Study for the Aircraft Asset Sustainment Process. Volume 1.

    DTIC Science & Technology

    1998-12-31

    solely to have a record that could be matched with the CMOS receipt data. (This problem is caused by DLA systems that currently do not populate CMOS with...unable to obtain passwords to the Depot D035 systems. Figure 16 shows daily savings as of 30 September 1998 (current time frame) and projects savings...Engineering, modeling, and systems/software development company LAN Local Area Network LFA Large Frame Aircraft LMA Logistics Management Agency LMR

  11. Automated array assembly task, phase 1

    NASA Technical Reports Server (NTRS)

    Carbajal, B. G.

    1977-01-01

    State-of-the-art technologies applicable to silicon solar cell and solar cell module fabrication were assessed. The assessment consisted of a technical feasibility evaluation and a cost projection for high volume production of solar cell modules. Design equations based on minimum power loss were used as a tool in the evaluation of metallization technologies. A solar cell process sensitivity study using models, computer calculations, and experimental data was used to identify process step variation and cell output variation correlations.

  12. Analysis of excimer laser radiant exposure effect toward corneal ablation volume at LASIK procedure

    NASA Astrophysics Data System (ADS)

    Adiati, Rima Fitria; Rini Rizki, Artha Bona; Kusumawardhani, Apriani; Setijono, Heru; Rahmadiansah, Andi

    2016-11-01

    LASIK (Laser-Assisted In Situ Keratomileusis) is a technique for correcting refractive disorders of the eye, such as myopia and astigmatism, using an excimer laser. The procedure uses photoablation to decompose corneal tissue. Although preferred for its efficiency, permanency, and accuracy, an inappropriate amount of radiant exposure often causes side effects such as under- or over-correction, irregular astigmatism, and damage to surrounding tissues. In this study, the effect of radiant exposure on corneal ablation volume was modelled through several steps. The collected data comprise the laser specifications (193 nm wavelength, beam diameter of 0.065 - 0.65 cm, and fluence of 160 mJ/cm2) and, for the medical data, the myopia-astigmatism value, cornea size, corneal ablation thickness, and flap data. The first modelling step determines the laser diameter between 0.065 and 0.65 cm in 0.45 cm increments. The energy, power, and intensity of the laser are determined from the laser beam area. The number of pulses and the total energy are calculated before the radiant exposure of the laser is obtained. The next step is to determine the parameters that influence the ablation volume. A regression method is used to create the equation, and the spot size is then substituted into the model. Validation uses statistical correlation against both experimental data and theory. With the model created, it is expected that potential complications can be prevented during LASIK procedures, and the recommendations give users a clearer picture of the radiant exposure appropriate to the required corneal ablation volume.
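
    A minimal sketch of the pulse-level arithmetic described above (beam area from spot diameter, pulse energy from fluence, pulse count from a total energy budget), in Python. The function names and the example energy budget are illustrative assumptions; the paper's regression model linking radiant exposure to ablation volume is not reproduced here.

        import math

        def pulse_energy_mj(fluence_mj_cm2, spot_diameter_cm):
            """Energy per pulse (mJ) = fluence (mJ/cm^2) x beam area (cm^2)."""
            area_cm2 = math.pi * (spot_diameter_cm / 2.0) ** 2
            return fluence_mj_cm2 * area_cm2

        def pulses_needed(total_energy_mj, fluence_mj_cm2, spot_diameter_cm):
            """Pulses required to deliver a total ablation energy budget (assumed)."""
            return math.ceil(total_energy_mj / pulse_energy_mj(fluence_mj_cm2, spot_diameter_cm))

        # The abstract's stated parameters: 193 nm excimer, 160 mJ/cm2 fluence,
        # spot diameters from 0.065 to 0.65 cm in 0.45 cm increments.
        for d_cm in (0.065, 0.515):
            e = pulse_energy_mj(160.0, d_cm)
            print(f"spot {d_cm} cm: {e:.3f} mJ/pulse, "
                  f"{pulses_needed(500.0, 160.0, d_cm)} pulses for a 500 mJ budget")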

  13. [Effect of SO2 volume fraction in flue gas on the adsorption behaviors adsorbed by ZL50 activated carbon and kinetic analysis].

    PubMed

    Gao, Ji-xian; Wang, Tie-feng; Wang, Jin-fu

    2010-05-01

    The influence of the SO2 volume fraction on the dynamic adsorption behavior of ZL50 activated carbon used for flue gas desulphurization and denitrification was investigated experimentally, and a kinetic analysis was conducted with kinetic models. With increasing SO2 volume fraction in the flue gas, the SO2 removal ratio and the activity ratio of the ZL50 activated carbon decreased, while the SO2 adsorption rate and capacity increased correspondingly. The calculated results indicate that the Bangham model has the best predictive performance and that the chemisorption of SO2 was significantly affected by the catalytic oxidative reaction. The adsorption rate constant of Lagergren's pseudo-first-order model increased with increasing inlet SO2 volume fraction, which indicated that the catalytic oxidative reaction of SO2 adsorbed on ZL50 activated carbon may be the rate-controlling step in the early adsorption stage. The Lagergren and Bangham initial adsorption rates were derived and defined, respectively, and the Ho and Elovich initial adsorption rates were also derived in this paper. The defined Bangham initial adsorption rate values were in good agreement with those of the experiments, and the defined Bangham adsorption kinetic model can describe the SO2 dynamic adsorption rate well. The results indicated that the SO2 partial order of the initial reaction rate was one or close to one, while the O2 and water vapor partial orders of the initial reaction rate were constants ranging from 0.15-0.20 and 0.45-0.50, respectively.
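
    To illustrate the kind of kinetic fitting reported above, the sketch below fits a Lagergren pseudo-first-order curve and a fractional-exponent uptake curve (used here as a stand-in for Bangham-type kinetics; the paper's exact Bangham formulation may differ) to synthetic uptake data. The data and parameter values are assumptions for demonstration only.

        import numpy as np
        from scipy.optimize import curve_fit

        def lagergren(t, qe, k1):
            """Pseudo-first-order uptake: q(t) = qe * (1 - exp(-k1*t))."""
            return qe * (1.0 - np.exp(-k1 * t))

        def bangham_like(t, qe, k, alpha):
            """Fractional-exponent uptake: q(t) = qe * (1 - exp(-k * t**alpha))."""
            return qe * (1.0 - np.exp(-k * t ** alpha))

        t = np.linspace(1.0, 120.0, 40)                    # time, min
        rng = np.random.default_rng(0)
        q_obs = 85.0 * (1.0 - np.exp(-0.04 * t ** 0.8)) + rng.normal(0.0, 1.0, t.size)

        for name, model, p0 in [("Lagergren", lagergren, [80.0, 0.05]),
                                ("Bangham-like", bangham_like, [80.0, 0.05, 1.0])]:
            popt, _ = curve_fit(model, t, q_obs, p0=p0)
            rss = float(np.sum((q_obs - model(t, *popt)) ** 2))
            print(f"{name}: params={np.round(popt, 4)}, RSS={rss:.1f}")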

  14. Quantification of the thorax-to-abdomen breathing ratio for breathing motion modeling.

    PubMed

    White, Benjamin M; Zhao, Tianyu; Lamb, James; Bradley, Jeffrey D; Low, Daniel A

    2013-06-01

    The purpose of this study was to develop a methodology to quantitatively measure the thorax-to-abdomen breathing ratio from a 4DCT dataset for breathing motion modeling and breathing motion studies. The thorax-to-abdomen breathing ratio was quantified by measuring the rate of cross-sectional volume increase throughout the thorax and abdomen as a function of tidal volume. Twenty-six 16-slice 4DCT patient datasets were acquired during quiet respiration using a protocol that acquired 25 ciné scans at each couch position. Fifteen datasets included data from the neck through the pelvis. Tidal volume, measured using a spirometer and abdominal pneumatic bellows, was used as the breathing-cycle surrogate. For each CT slice, the cross-sectional volume encompassed by the skin contour exhibited a nearly linear relationship with tidal volume. A robust iteratively reweighted least squares regression analysis was used to determine η(i), defined as the amount of cross-sectional volume expansion at each slice i per unit tidal volume. The sum Ση(i) throughout all slices was predicted to be the ratio of the geometric expansion of the lung to the tidal volume: 1.11. The xiphoid process was selected as the boundary between the thorax and abdomen. The xiphoid process slice was identified in a scan acquired at mid-inhalation. The imaging protocol had not originally been designed for measuring the thorax-to-abdomen breathing ratio, so the scans did not extend to the anatomy with η(i) = 0. Extrapolation of η(i) to η(i) = 0 was used to include the entire breathing volume. The thorax and abdomen regions were individually analyzed to determine the thorax-to-abdomen breathing ratios. There were 11 image datasets that had been scanned only through the thorax. For these cases, the abdomen breathing component was taken as 1.11 - Ση(i), where the sum was taken throughout the thorax. The average Ση(i) for thorax and abdomen image datasets was found to be 1.20 ± 0.17, close to the expected value of 1.11. The thorax-to-abdomen breathing ratio was 0.32 ± 0.24. The average Ση(i) was 0.26 ± 0.14 in the thorax and 0.93 ± 0.22 in the abdomen. In the scan datasets that encompassed only the thorax, the average Ση(i) was 0.21 ± 0.11. A method to quantify the relationship between abdomen and thoracic breathing was developed and characterized.
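
    The per-slice slope estimation described above can be sketched with a robust IRLS fit. All array contents below are synthetic stand-ins; only the structure (cross-sectional slice volume regressed on tidal volume, slope = η(i)) follows the abstract.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n_scans, n_slices = 25, 16                 # 25 cine scans per couch position
        tidal_volume = rng.uniform(200.0, 800.0, n_scans)          # cm^3
        true_eta = rng.uniform(0.0, 0.1, n_slices)
        slice_volume = (5000.0 + np.outer(tidal_volume, true_eta)
                        + rng.normal(0.0, 5.0, (n_scans, n_slices)))

        eta = np.empty(n_slices)
        X = sm.add_constant(tidal_volume)
        for i in range(n_slices):
            # Robust iteratively reweighted least squares (Huber weights):
            # the slope is eta(i), the expansion per unit tidal volume.
            fit = sm.RLM(slice_volume[:, i], X, M=sm.robust.norms.HuberT()).fit()
            eta[i] = fit.params[1]

        print("sum of eta over imaged slices:", round(eta.sum(), 3))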

  15. A normal tissue dose response model of dynamic repair processes.

    PubMed

    Alber, Markus; Belka, Claus

    2006-01-07

    A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.
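
    The paper derives its own analytical dose-response form, which is not reproduced here. For orientation, the sketch below evaluates the classic serial critical-element NTCP for an inhomogeneous dose distribution; the single-element response function and its parameters are illustrative assumptions.

        import numpy as np

        def element_response(dose_gy, d50=50.0, k=4.0):
            """Illustrative logistic complication probability of one element."""
            return 1.0 / (1.0 + (d50 / np.maximum(dose_gy, 1e-9)) ** k)

        def serial_ntcp(dose_gy, frac_volume):
            """Critical-element model: the organ is damaged if any element fails,
            so NTCP = 1 - prod_i (1 - P(D_i)) ** v_i over the differential DVH."""
            p = element_response(np.asarray(dose_gy, dtype=float))
            return 1.0 - np.prod((1.0 - p) ** np.asarray(frac_volume, dtype=float))

        # Differential DVH: 20% of the organ at 60 Gy, 80% at 20 Gy.
        print(round(serial_ntcp([60.0, 20.0], [0.2, 0.8]), 4))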

  16. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

    This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM which both use a pressure-based (σ-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.

  17. Empirical models to predict the volumes of debris flows generated by recently burned basins in the western U.S.

    USGS Publications Warehouse

    Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.

    2008-01-01

    Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically-based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined either by measuring the volume of material eroded from the channels or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model. This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.
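
    A sketch of the regression machinery behind such empirical volume models: ordinary least squares on log-transformed volume, with predictors named after the abstract's Western U.S. model variables. The data and fitted coefficients below are synthetic illustrations, not the published model values.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 53                                   # basins, as in the study
        a30 = rng.uniform(0.1, 5.0, n)           # area with slope >= 30% (km^2)
        burn = rng.uniform(0.1, 8.0, n)          # moderate/high-severity burned area (km^2)
        rain = rng.uniform(5.0, 60.0, n)         # total storm rainfall (mm)
        ln_v = (6.0 + 0.5*np.log(a30) + 0.4*np.sqrt(burn) + 0.02*rain
                + rng.normal(0.0, 0.5, n))

        # Least-squares fit of ln(V) on the transformed predictors.
        X = np.column_stack([np.ones(n), np.log(a30), np.sqrt(burn), rain])
        beta, *_ = np.linalg.lstsq(X, ln_v, rcond=None)
        resid = ln_v - X @ beta
        r2 = 1.0 - resid.var() / ln_v.var()
        print("coefficients:", np.round(beta, 3), "R^2:", round(r2, 3))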

  18. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
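
    The model comparison described above can be sketched with scikit-learn: leave-one-out predicted probabilities from a logistic model and an RBF-kernel SVM, scored by Spearman rank correlation against the outcome. The two predictors are named after the abstract's selected variables, but the data are synthetic stand-ins.

        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        gtv = rng.lognormal(4.0, 0.6, 56)          # GTV volume (cm^3), synthetic
        v75 = rng.uniform(0.0, 100.0, 56)          # V75 (%), synthetic
        X = np.column_stack([gtv, v75])
        y = (0.02*v75 - 0.01*gtv + rng.normal(0.0, 1.0, 56) > 0.0).astype(int)

        for name, clf in [("logistic", LogisticRegression()),
                          ("SVM (RBF)", SVC(probability=True))]:
            model = make_pipeline(StandardScaler(), clf)
            # Each patient is predicted by a model trained on the other 55.
            p = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                                  method="predict_proba")[:, 1]
            print(name, "rs =", round(spearmanr(p, y)[0], 2))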

  19. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to effectively automate the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into the development of advanced technologies for Computer Aided Software Engineering (CASE).

  20. Effects of gestational age on brain volume and cognitive functions in generally healthy very preterm born children during school-age: A voxel-based morphometry study.

    PubMed

    Lemola, Sakari; Oser, Nadine; Urfer-Maurer, Natalie; Brand, Serge; Holsboer-Trachsler, Edith; Bechtel, Nina; Grob, Alexander; Weber, Peter; Datta, Alexandre N

    2017-01-01

    To determine whether the relationship of gestational age (GA) with brain volumes and cognitive functions is linear or whether it follows a threshold model in preterm and term-born children during school-age. We studied 106 children (M = 10 years 1 month, SD = 16 months; 40 females) enrolled in primary school: 57 were healthy very preterm children (10 children born 24-27 completed weeks' gestation (extremely preterm), 14 children born 28-29 completed weeks' gestation, 19 children born 30-31 completed weeks' gestation (very preterm), and 14 born 32 completed weeks' gestation (moderately preterm)), all born appropriate for GA (AGA), and 49 term-born children. Neuroimaging involved voxel-based morphometry with the statistical parametric mapping software. Cognitive functions were assessed with the WISC-IV. General linear models and multiple regressions were conducted controlling for age, sex, and maternal education. Compared to groups of children born 30 completed weeks' gestation and later, children born <28 completed weeks' gestation had less gray matter volume (GMV) and white matter volume (WMV) and poorer cognitive functions, including decreased full-scale IQ and processing speed. Differences in GMV partially mediated the relationship between GA and full-scale IQ in preterm born children. In preterm children who are born AGA and without major complications, GA is associated with brain volume and cognitive functions. In particular, decreased brain volume becomes evident in the extremely preterm group (born <28 completed weeks' gestation). In preterm children born 30 completed weeks' gestation and later, the relationship of GA with brain volume and cognitive functions may be less strong than previously thought.

  1. Classifying magnetic resonance image modalities with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis

    2018-02-01

    Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs post-contrast T1, (3) identify pre- vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
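
    A minimal PyTorch sketch of a 3D CNN contrast classifier of the general kind described; the layer sizes, depth, and input size are assumptions, not the architecture reported in the paper.

        import torch
        import torch.nn as nn

        class ContrastCNN(nn.Module):
            """Maps an MR volume to one of three contrast labels (T1-w/T2-w/FLAIR)."""
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),          # global pooling over the volume
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # One dummy 64^3 volume with a singleton channel axis.
        logits = ContrastCNN()(torch.randn(1, 1, 64, 64, 64))
        print(logits.shape)                           # torch.Size([1, 3])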

  2. Model-based adaptive 3D sonar reconstruction in reverberating environments.

    PubMed

    Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le

    2015-10-01

    In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.

  3. Production model in the conditions of unstable demand taking into account the influence of trading infrastructure: Ergodicity and its application

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2015-04-01

    A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by an attempt to analyze the problems of functioning of poorly competitive macroeconomic structures. The model is formalized in the form of a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic, and its final probability distribution is found. Expressions for the average production load and the average product stock are found by analyzing the stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the firm's market value assessment and the production load level is analyzed using comparative statics methods.
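
    The paper solves its Bellman equation in closed form; as a numerical counterpart, the sketch below runs value iteration on a toy discrete production/stock problem with a stock cap and random demand. Every parameter value here is an illustrative assumption, not taken from the paper.

        import numpy as np

        S, U = 21, 6                         # stock levels 0..20, production 0..5
        beta, price, c_prod, c_hold = 0.95, 1.0, 0.4, 0.05
        demand = np.arange(6)                # equally likely demand realizations
        p_dem = np.full(6, 1.0 / 6.0)

        V = np.zeros(S)
        for _ in range(1000):                # value iteration to a fixed point
            Q = np.full((S, U), -np.inf)
            for s in range(S):
                for u in range(U):
                    if s + u >= S:           # respect the maximum stock volume
                        continue
                    sales = np.minimum(s + u, demand)
                    s_next = s + u - sales
                    reward = price*sales - c_prod*u - c_hold*s_next
                    Q[s, u] = p_dem @ (reward + beta * V[s_next])
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < 1e-9:
                break
            V = V_new
        print("optimal production at zero stock:", int(Q[0].argmax()))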

  4. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions.

    PubMed

    Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco

    2016-05-01

    The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
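
    The truncated Dirichlet-process mixture available in scikit-learn gives a quick feel for the approach: fit voxel intensities without fixing the number of classes, then take the highest-mean component as the lesion. This is a simplified stand-in for the paper's R implementation, run on a synthetic image.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(4)
        img = rng.normal(1.0, 0.15, (64, 64))          # warm background
        yy, xx = np.mgrid[:64, :64]
        lesion = (yy - 32)**2 + (xx - 32)**2 < 8**2    # hot spherical insert
        img[lesion] += rng.normal(4.0, 0.4, int(lesion.sum()))

        # Dirichlet-process prior: surplus components get near-zero weight,
        # so no class count needs to be chosen in advance.
        dpm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior_type="dirichlet_process",
            random_state=0,
        ).fit(img.reshape(-1, 1))
        labels = dpm.predict(img.reshape(-1, 1)).reshape(img.shape)
        hot = labels == int(dpm.means_.argmax())       # highest-uptake component
        print("segmented voxels:", int(hot.sum()), "true voxels:", int(lesion.sum()))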

  5. A Dirichlet process mixture model for automatic 18F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giri, Maria Grazia, E-mail: mariagrazia.giri@ospedaleuniverona.it; Cavedon, Carlo; Mazzarotto, Renzo

    Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This “calibration curve” was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. Conclusions: The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.

  6. Measuring pedestrian volumes and conflicts. Volume 1, Pedestrian volume sampling

    DOT National Transportation Integrated Search

    1987-12-01

    This final report presents the findings, conclusions, and recommendations of the study conducted to develop a model to predict pedestrian volumes using small sampling schemes. This research produced four pedestrian volume prediction models (i.e., 1-,...

  7. Developing a scalable model of recombinant protein yield from Pichia pastoris: the influence of culture conditions, biomass and induction regime

    PubMed Central

    Holmes, William J; Darby, Richard AJ; Wilks, Martin DB; Smith, Rodney; Bill, Roslyn M

    2009-01-01

    Background The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time. PMID:19570229
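
    A sketch of the response-surface step in a DoE analysis of this kind: fit a full quadratic model of yield in temperature, pH, and dissolved oxygen by least squares. The screening data below are synthetic stand-ins with an assumed optimum, not the study's measurements.

        import numpy as np
        from itertools import combinations_with_replacement

        rng = np.random.default_rng(5)
        T = rng.uniform(20.0, 30.0, 40)        # temperature, C
        pH = rng.uniform(5.0, 7.0, 40)
        DO = rng.uniform(10.0, 60.0, 40)       # % dissolved oxygen
        y = (100.0 - 2.0*(T - 25.0)**2 - 30.0*(pH - 6.0)**2
             - 0.01*(DO - 30.0)**2 + rng.normal(0.0, 3.0, 40))

        # Design matrix: intercept, linear, interaction and squared terms.
        f = np.column_stack([T, pH, DO])
        cols = [np.ones(40), f[:, 0], f[:, 1], f[:, 2]]
        cols += [f[:, i]*f[:, j] for i, j in combinations_with_replacement(range(3), 2)]
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        pred = X @ beta
        print("R^2:", round(1 - ((y - pred)**2).sum() / ((y - y.mean())**2).sum(), 3))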

  8. A new method for analyzing the refill process and fabrication of a piezoelectric inkjet printing head for LCD color filter manufacturing

    NASA Astrophysics Data System (ADS)

    Moon, Kang Seok; Choi, Jung Hoon; Choi, Dong-June; Kim, Sun Ho; Hyo Ha, Man; Nam, Hyo-Jin; Kim, Min Soo

    2008-12-01

    This paper presents a method for analyzing the refill process of a piezoelectric inkjet printing head with a high firing frequency for color filter manufacturing. Theoretical and experimental studies on the equivalent length (Leq) versus jetting characteristics were performed. The new model shows quantitatively the same results as a commercial simulation code. It is also identified that the refill time increases with the equivalent liquid length (Leq) because the viscous force increases. The inkjet printing head was designed with a lumped model analysis and fabricated from a silicon wafer (1 0 0) by a MEMS process. To investigate how the equivalent length (Leq) influences the firing frequency, an experiment was conducted using a stroboscope. In the case of colorant ink, it is possible to eject an ink droplet at up to 5 kHz with a 40 pl drop volume. On the other hand, the firing frequency calculated with the new model is about 3 kHz under the condition of an equivalent liquid length (Leq) of 250 µm. The difference between the new model and the experiment may be a result of a mismatch of the initial meniscus position due to meniscus oscillation. Experimentally, the meniscus oscillation is observed through optical measurement with a visualization apparatus and a transparent nozzle. Hence the accuracy of the new model may be better in the high-viscosity range. The ways to increase the firing frequency are to reduce the equivalent length (Leq) and to modify the ink properties. Because the former tends to decrease viscous loss and the latter tends to increase viscous damping, the two parameters should be combined adequately within an allowable drop volume.

  9. An immersed boundary-lattice Boltzmann model for biofilm growth and its impact on the NAPL dissolution in porous media

    NASA Astrophysics Data System (ADS)

    Benioug, M.; Yang, X.

    2017-12-01

    The evolution of microbial phase within porous medium is a complex process that involves growth, mortality, and detachment of the biofilm or attachment of moving cells. A better understanding of the interactions among biofilm growth, flow and solute transport and a rigorous modeling of such processes are essential for a more accurate prediction of the fate of pollutants (e.g. NAPLs) in soils. However, very few works are focused on the study of such processes in multiphase conditions (oil/water/biofilm systems). Our proposed numerical model takes into account the mechanisms that control bacterial growth and its impact on the dissolution of NAPL. An Immersed Boundary - Lattice Boltzmann Model (IB-LBM) is developed for flow simulations along with non-boundary conforming finite volume methods (volume of fluid and reconstruction methods) used for reactive solute transport. A sophisticated cellular automaton model is also developed to describe the spatial distribution of bacteria. A series of numerical simulations have been performed on complex porous media. A quantitative diagram representing the transitions between the different biofilm growth patterns is proposed. The bioenhanced dissolution of NAPL in the presence of biofilms is simulated at the pore scale. A uniform dissolution approach has been adopted to describe the temporal evolution of trapped blobs. Our simulations focus on the dissolution of NAPL in abiotic and biotic conditions. In abiotic conditions, we analyze the effect of the spatial distribution of NAPL blobs on the dissolution rate under different assumptions (blobs size, Péclet number). In biotic conditions, different conditions are also considered (spatial distribution, reaction kinetics, toxicity) and analyzed. The simulated results are consistent with those obtained from the literature.
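
    The cellular-automaton component lends itself to a compact sketch: cells on a grid divide into neighboring sites with a probability tied to the local nutrient level. This is a highly simplified stand-in for the coupled IB-LBM/finite-volume model described above; the grid, rates, and nutrient field are all assumptions.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 50
        biofilm = np.zeros((n, n), dtype=bool)
        biofilm[n // 2, n // 2] = True                 # initial attached cell
        nutrient = np.linspace(1.0, 0.2, n)[None, :] * np.ones((n, n))

        for _ in range(200):                           # growth steps
            ys, xs = np.nonzero(biofilm)
            for y, x in zip(ys, xs):
                # Division probability scales with local nutrient concentration.
                if rng.random() < 0.1 * nutrient[y, x]:
                    dy, dx = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
                    biofilm[(y + dy) % n, (x + dx) % n] = True
        print("occupied fraction:", round(float(biofilm.mean()), 3))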

  10. High-Resolution Monitoring of Coastal Dune Erosion and Growth Using an Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Ruessink, G.; Markies, H.; Van Maarseveen, M.

    2014-12-01

    Coastal foredunes lose and gain sand through marine and aeolian processes, but coastal-evolution models that can accurately predict both wave-driven dune erosion and wind-blown dune growth do not yet exist. This is due to the lack of adequate field data sets from which erosion and supply volumes can be studied simultaneously, together with a limited understanding of coastal aeolian process dynamics. Here, we quantify coastal foredune dynamics using nine topographic surveys performed near Egmond aan Zee, The Netherlands, between September 2011 and March 2014 using an unmanned aerial vehicle (UAV). The approximately 0.75-km long study site comprises a 30-100 m wide sandy beach and a 20-25 m high foredune, of which the higher parts are densely vegetated with European marram grass. Using a structure-from-motion workflow, the 200-500 photographs taken during each UAV flight were processed into a point cloud, from which a geo-referenced digital surface model with a 0.25 x 0.25 m resolution was subsequently computed. Our data set contains two dune-erosion events, including that due to storm Xaver (December 2013), which caused one of the highest surge levels in the southern North Sea region of the last decades. Dune erosion during both events varied alongshore from the destruction of embryonic dunes on the upper beach to the slumping of the entire dune face. During the first storm (January 2012), erosion volumes ranged from 5 m3/m in the (former) embryonic dune field to over 40 m3/m elsewhere. During the subsequent 11 (spring - autumn) months, the foredune accreted by (on average) 8 m3/m, again with substantial alongshore variability (0 - 20 m3/m). Intriguingly, volume changes during the 2012-2013 winter were minimal. We will compare the observed aeolian supply rates with model predictions and discuss reasons for their temporal variability. Funded by the Dutch Organisation for Scientific Research NWO.

  11. Micro-optical fabrication by ultraprecision diamond machining and precision molding

    NASA Astrophysics Data System (ADS)

    Li, Hui; Li, Likai; Naples, Neil J.; Roblee, Jeffrey W.; Yi, Allen Y.

    2017-06-01

    Ultraprecision diamond machining and high-volume molding for affordable, high-precision, high-performance optical elements are becoming a viable process in the optical industry for low-cost, high-quality micro-optical component manufacturing. In this process, high-precision micro-optical molds are first fabricated using ultraprecision single point diamond machining, followed by high-volume production methods such as compression or injection molding. In the last two decades, there have been steady improvements in ultraprecision machine design and performance, particularly with the introduction of both slow tool and fast tool servo. Today optical molds, including freeform surfaces and microlens arrays, are routinely diamond machined to final finish without post-machining polishing. For consumers, compression molding or injection molding provides efficient and high-quality optics at extremely low cost. In this paper, ultraprecision machine design and machining processes such as slow tool and fast tool servo are first described, and then both compression molding and injection molding of polymer optics are discussed. To implement precision optical manufacturing by molding, numerical modeling can be included in the future as a critical part of the manufacturing process to ensure high product quality.

  12. Theoretical test of Jarzynski's equality for reversible volume-switching processes of an ideal gas system.

    PubMed

    Sung, Jaeyoung

    2007-07-01

    We present an exact theoretical test of Jarzynski's equality (JE) for reversible volume-switching processes of an ideal gas system. The exact analysis shows that the prediction of JE for the free energy difference is the same as the work done on the gas system during the reversible process, which depends on the shape of the path of the reversible volume-switching process.
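
    As a worked special case (assuming here an isothermal quasi-static volume switch of N ideal gas particles; the paper treats general reversible paths), the work distribution collapses to a single value and JE holds as an identity:

        W_{\mathrm{rev}} \;=\; -\int_{V_1}^{V_2} P \,\mathrm{d}V
        \;=\; -N k_{B} T \ln\frac{V_2}{V_1} \;=\; \Delta F,
        \qquad\text{hence}\qquad
        \bigl\langle e^{-\beta W} \bigr\rangle \;=\; e^{-\beta W_{\mathrm{rev}}} \;=\; e^{-\beta\,\Delta F}.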

  13. Numerical simulation of seismic wave propagation from land-excited large volume air-gun source

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, W.

    2017-12-01

    The land-excited large volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low-frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs or man-made pools. Numerical simulation of the seismic wave propagation from the air-gun source helps to understand the energy partitioning and the characteristics of the waveform records at stations. However, the effective energy recorded at a distant station comes from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate the seismic wave propagation from the land-excited large volume air-gun source by the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling and propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated by a bubble model. We use the wave injection method, combining the bubble wavelet with the elastic wave equation, to achieve the source coupling. Then, the solid-fluid boundary condition is implemented along the water bottom. The last part is the seismic wave propagation in the solid medium, which can be readily implemented by the finite difference method. Our method can obtain accurate waveforms for the land-excited large volume air-gun source. Based on the above forward modeling technology, we analyze the effect of the excited P wave and the energy of the converted S wave for different water body shapes. We study two land-excited large volume air-gun fields, one in Binchuan, Yunnan, and the other in Hutubi, Xinjiang. The station in Binchuan, Yunnan is located in a large irregular reservoir, and the waveform records have a clear S wave. Nevertheless, the station in Hutubi, Xinjiang is located in a small man-made pool, and the waveform records have a very weak S wave. Better understanding of the characteristics of the land-excited large volume air-gun can help make better use of the air-gun source.
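
    A minimal 1D acoustic finite-difference sketch of the propagation stage described above, with a damped low-frequency sinusoid standing in for the air-gun bubble-oscillation wavelet. Grid, velocity, and wavelet parameters are illustrative assumptions, not values from the study.

        import numpy as np

        nx, nt, dx, dt, c = 600, 1500, 10.0, 1e-3, 3000.0   # grid; m, s, m/s
        assert c * dt / dx <= 1.0                           # CFL stability condition

        t = np.arange(nt) * dt
        wavelet = np.exp(-2.0 * t) * np.sin(2.0 * np.pi * 5.0 * t)   # ~5 Hz "bubble" pulse

        p_prev, p, src = np.zeros(nx), np.zeros(nx), nx // 4
        for it in range(nt):
            lap = np.zeros(nx)
            lap[1:-1] = p[2:] - 2.0 * p[1:-1] + p[:-2]      # second spatial difference
            p_next = 2.0 * p - p_prev + (c * dt / dx) ** 2 * lap
            p_next[src] += wavelet[it]                      # inject the source wavelet
            p_prev, p = p, p_next

        print("amplitude at a 'station' node:", abs(p[3 * nx // 4]))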

  14. Anterior Cortical Development During Adolescence in Bipolar Disorder

    PubMed Central

    Najt, Pablo; Wang, Fei; Spencer, Linda; Johnston, Jennifer A.Y.; Cox Lippard, Elizabeth T.; Pittman, Brian P.; Lacadie, Cheryl; Staib, Lawrence H.; Papademetris, Xenophon; Blumberg, Hilary P.

    2015-01-01

    Background Increasing evidence supports a neurodevelopmental model for bipolar disorder (BD), with adolescence as a critical period in its development. Developmental abnormalities of anterior paralimbic and heteromodal frontal cortices, key structures in emotional regulation processes and central in BD, are implicated. However, few longitudinal studies have been conducted, limiting understanding of trajectory alterations in BD. In this study, we performed longitudinal neuroimaging of adolescents with and without BD and assessed volume changes over time, including changes in tissue overall and within gray and white matter. Larger decreases over time in anterior cortical volumes in the adolescents with BD were hypothesized. Gray matter decreases and white matter increases are typically observed during adolescence in anterior cortices. It was hypothesized that volume decreases over time in BD would reflect alterations in those processes, showing larger gray matter contraction and decreased white matter expansion. Methods Two high-resolution magnetic resonance imaging scans were obtained approximately two years apart for 35 adolescents with BDI and 37 healthy adolescents. Differences over time between groups were investigated for volume overall and specifically for gray and white matter. Results Relative to healthy adolescents, adolescents with BDI showed greater volume contraction over time in a region including insula, and orbitofrontal, rostral and dorsolateral prefrontal cortices (P<.05, corrected), including greater gray matter contraction and decreased white matter expansion over time, in the BD compared to the healthy group. Conclusions The findings support neurodevelopmental abnormalities during adolescence in BDI in anterior cortices, including altered developmental trajectories of anterior gray and white matter. PMID:26033826

  15. Future supply chains enabled by continuous processing--opportunities and challenges. May 20-21, 2014 Continuous Manufacturing Symposium.

    PubMed

    Srai, Jagjit Singh; Badman, Clive; Krumme, Markus; Futran, Mauricio; Johnston, Craig

    2015-03-01

    This paper examines the opportunities and challenges facing the pharmaceutical industry in moving to a primarily "continuous processing"-based supply chain. The current predominantly "large batch" and centralized manufacturing system designed for the "blockbuster" drug has driven a slow-paced, inventory heavy operating model that is increasingly regarded as inflexible and unsustainable. Indeed, new markets and the rapidly evolving technology landscape will drive more product variety, shorter product life-cycles, and smaller drug volumes, which will exacerbate an already unsustainable economic model. Future supply chains will be required to enhance affordability and availability for patients and healthcare providers alike despite the increased product complexity. In this more challenging supply scenario, we examine the potential for a more pull driven, near real-time demand-based supply chain, utilizing continuous processing where appropriate as a key element of a more "flow-through" operating model. In this discussion paper on future supply chain models underpinned by developments in the continuous manufacture of pharmaceuticals, we have set out: the significant opportunities in moving to a supply chain flow-through operating model, with substantial opportunities in inventory reduction, lead-time to patient, and radically different product assurance/stability regimes; scenarios for decentralized production models producing a greater variety of products with enhanced volume flexibility; production, supply, and value chain footprints that are radically different from today's monolithic and centralized batch manufacturing operations; clinical trial and drug product development cost savings that support more rapid scale-up and market entry models with early involvement of SC designers within New Product Development; and the major supply chain and industrial transformational challenges that need to be addressed. The paper recognizes that although current batch operational performance in pharma is far from optimal and not necessarily an appropriate end-state benchmark for batch technology, the adoption of continuous supply chain operating models underpinned by continuous production processing, as full or hybrid solutions in selected product supply chains, can support industry transformations to deliver right-first-time quality at substantially lower inventory profiles. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  16. Quantifying Glacier Volume Change Using UAV-Derived Imagery and Structure from Motion Photogrammetry

    NASA Astrophysics Data System (ADS)

    Decker, C. R.; La Frenierre, J.

    2017-12-01

    Glaciers in the Tropical Andes, like those worldwide, are experiencing rapid ice volume loss due to climate change. Tropical glaciers are of significant interest because they are especially sensitive to climate change. Quantifying the rate of ice volume loss is important given this sensitivity and the importance of glacier meltwater for downstream human use. Past studies have found shrinking ice surface areas, but the actual rate of volume loss gives more information about how glaciers are reacting to climate change, as well as about the direct hydrological effects of ice volume loss. In this study we determined the rate of ice volume loss for a debris-covered section of the Reschreiter Glacier and a portion of the clean-ice tongue of the Hans Meyer Glacier on Volcán Chimborazo in Ecuador. Traditional geodetic approaches to measuring ice volume change, including the use of satellite-derived digital elevation models and airborne LIDAR, are difficult in this case due to the small size of Chimborazo's glaciers, frequently cloudy conditions, and limited local resources. Instead, we obtained imagery with an Unmanned Aerial Vehicle (UAV) and processed this imagery using Structure from Motion photogrammetry. Our results are used to evaluate the role of elevation and debris cover as Chimborazo's glaciers respond to climate change.
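
    The core geodetic computation, once SfM has produced co-registered digital elevation models, is a per-cell elevation difference summed over the grid. A sketch on toy grids (real inputs would be the UAV-derived DEMs from the two survey dates):

        import numpy as np

        def volume_change(dem_t0, dem_t1, cell_size_m):
            """Net volume change (m^3) between co-registered, same-extent DEMs:
            per-cell elevation difference times cell area, summed over the grid."""
            return float(np.nansum(dem_t1 - dem_t0) * cell_size_m ** 2)

        rng = np.random.default_rng(6)
        dem0 = 5000.0 + rng.normal(0.0, 0.05, (200, 200))      # 0.5 m cells, synthetic
        dem1 = dem0 - 1.2 + rng.normal(0.0, 0.05, (200, 200))  # ~1.2 m mean lowering
        print(f"ice volume change: {volume_change(dem0, dem1, 0.5):,.0f} m^3")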

  17. Brain volumetric changes and cognitive ageing during the eighth decade of life

    PubMed Central

    Dickie, David Alexander; Cox, Simon R.; Valdes Hernandez, Maria del C.; Corley, Janie; Royle, Natalie A.; Pattie, Alison; Aribisala, Benjamin S.; Redmond, Paul; Muñoz Maniega, Susana; Taylor, Adele M.; Sibbett, Ruth; Gow, Alan J.; Starr, John M.; Bastin, Mark E.; Wardlaw, Joanna M.; Deary, Ian J.

    2015-01-01

    Abstract Later‐life changes in brain tissue volumes—decreases in the volume of healthy grey and white matter and increases in the volume of white matter hyperintensities (WMH)—are strong candidates to explain some of the variation in ageing‐related cognitive decline. We assessed fluid intelligence, memory, processing speed, and brain volumes (from structural MRI) at mean age 73 years, and at mean age 76 in a narrow‐age sample of older individuals (n = 657 with brain volumetric data at the initial wave, n = 465 at follow‐up). We used latent variable modeling to extract error‐free cognitive levels and slopes. Initial levels of cognitive ability were predictive of subsequent brain tissue volume changes. Initial brain volumes were not predictive of subsequent cognitive changes. Brain volume changes, especially increases in WMH, were associated with declines in each of the cognitive abilities. All statistically significant results were modest in size (absolute r‐values ranged from 0.114 to 0.334). These results build a comprehensive picture of macrostructural brain volume changes and declines in important cognitive faculties during the eighth decade of life. Hum Brain Mapp 36:4910–4925, 2015. © 2015 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc PMID:26769551

  18. Inpatient Volume and Quality of Mental Health Care Among Patients With Unipolar Depression.

    PubMed

    Rasmussen, Line Ryberg; Mainz, Jan; Jørgensen, Mette; Videbech, Poul; Johnsen, Søren Paaske

    2018-04-26

    The relationship between inpatient volume and the quality of mental health care remains unclear. This study examined the association between inpatient volume in psychiatric hospital wards and quality of mental health care among patients with depression admitted to wards in Denmark. In a nationwide, population-based cohort study, 17,971 patients (N=21,120 admissions) admitted to psychiatric hospital wards between 2011 and 2016 were identified from the Danish Depression Database. Inpatient volume was categorized into quartiles according to the individual ward's average caseload volume per year during the study period: low volume (quartile 1, <102 inpatients per year), medium volume (quartile 2, 102-172 inpatients per year), high volume (quartile 3, 173-227 inpatients per year) and very high volume (quartile 4, >227 inpatients per year). Quality of mental health care was assessed by receipt of process performance measures reflecting national clinical guidelines for care of depression. Compared with patients admitted to low-volume psychiatric hospital wards, patients admitted to very-high-volume wards were more likely to receive a high overall quality of mental health care (≥80% of the recommended process performance measures) (adjusted relative risk [ARR]=1.78, 95% confidence interval [CI]=1.02-3.09) as well as individual processes of care, including a somatic examination (ARR=1.35, CI=1.03-1.78). Admission to very-high-volume psychiatric hospital wards was associated with a greater chance of receiving guideline-recommended process performance measures for care of depression.

  19. Spatial and temporal variation of residence time and storage volume of subsurface water evaluated by multi-tracers approach in mountainous headwater catchments

    NASA Astrophysics Data System (ADS)

    Tsujimura, Maki; Yano, Shinjiro; Abe, Yutaka; Matsumoto, Takehiro; Yoshizawa, Ayumi; Watanabe, Ysuhito; Ikeda, Koichi

    2015-04-01

    Headwater catchments in mountainous regions are the most important recharge areas for surface and subsurface waters, and information on the age and storage of this water is essential for understanding hydrological processes in the catchments. However, there have been few studies evaluating the variation in residence time and storage volume of subsurface water in time and space in mountainous headwaters, especially those with steep slopes. We investigated age dating and storage-volume estimation of subsurface water, using a simple water budget model, together with tracing of hydrological flow processes, in mountainous catchments underlain by granite, Paleozoic and Tertiary rocks in Yamanashi and Tsukuba, central Japan. We conducted hydrometric measurements and sampling of spring, stream and ground waters in high-flow and low-flow seasons from 2008 through 2012 in the catchments, and CFCs, stable isotopic ratios of oxygen-18 and deuterium, and inorganic solute constituent concentrations were determined for all water samples. The residence time of subsurface water ranged from 11 to 60 years in the granite catchments, from 17 to 32 years in the Paleozoic catchments, and from 13 to 26 years in the Tertiary catchments, and was younger during the high-flow season and older in the low-flow season. The storage volume of subsurface water was estimated to range from 10^4 to 10^6 m3 in the granite catchments, from 10^5 to 10^7 m3 in the Paleozoic catchments, and from 10^4 to 10^6 m3 in the Tertiary catchments. In addition, the seasonal change of storage volume in the granite catchments was the largest compared with those of the Paleozoic and Tertiary catchments. The results suggest that dynamic change in hydrological processes causes a larger variation of residence time and storage volume of subsurface water in time and space in the granite catchments, whereas a higher groundwater recharge rate due to frequent fissures or cracks seems to cause a larger storage volume of subsurface water in the Paleozoic catchments, though the variation there is not as considerable. Numerical simulation results support these findings.
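
    A simple steady-state water-budget identity underlies storage estimates of this kind: storage volume equals outflow times mean residence time. A sketch with assumed numbers chosen only to land in the reported range; the study's actual budget model and inputs are not reproduced here.

        # Steady-state water budget: V (m^3) = Q (m^3/yr) x tau (yr).
        q_m3_per_yr = 5.0e4        # assumed mean catchment outflow
        tau_yr = 30.0              # assumed tracer-derived mean residence time
        storage_m3 = q_m3_per_yr * tau_yr
        print(f"storage volume: {storage_m3:.2e} m^3")   # 1.50e+06 m^3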

  20. Quantifying model uncertainty in seasonal Arctic sea-ice forecasts

    NASA Astrophysics Data System (ADS)

    Blanchard-Wrigglesworth, Edward; Barthélemy, Antoine; Chevallier, Matthieu; Cullather, Richard; Fučkar, Neven; Massonnet, François; Posey, Pamela; Wang, Wanqiu; Zhang, Jinlun; Ardilouze, Constantin; Bitz, Cecilia; Vernieres, Guillaume; Wallcraft, Alan; Wang, Muyin

    2017-04-01

    Dynamical model forecasts in the Sea Ice Outlook (SIO) of September Arctic sea-ice extent over the last decade have shown lower skill than that found in both idealized model experiments and hindcasts of previous decades. Additionally, it is unclear how different model physics, initial conditions or post-processing techniques contribute to SIO forecast uncertainty. In this work, we have produced a seasonal forecast of 2015 Arctic summer sea ice using SIO dynamical models initialized with identical sea-ice thickness in the central Arctic. Our goals are to calculate the relative contribution of model uncertainty and irreducible error growth to forecast uncertainty and assess the importance of post-processing, and to contrast pan-Arctic forecast uncertainty with regional forecast uncertainty. We find that prior to forecast post-processing, model uncertainty is the main contributor to forecast uncertainty, whereas after forecast post-processing forecast uncertainty is reduced overall, model uncertainty is reduced by an order of magnitude, and irreducible error growth becomes the main contributor to forecast uncertainty. While all models generally agree in their post-processed forecasts of September sea-ice volume and extent, this is not the case for sea-ice concentration. Additionally, forecast uncertainty of sea-ice thickness grows at a much higher rate along Arctic coastlines relative to the central Arctic ocean. Potential ways of offering spatial forecast information based on the timescale over which the forecast signal beats the noise are also explored.

  1. Agricultural Handling and Processing Industries; Data Pertinent to an Evaluation of Overtime Exemptions Available under the Fair Labor Standards Act. Volume II, Appendices.

    ERIC Educational Resources Information Center

    Wage and Labor Standards Administration (DOL), Washington, DC.

    Definitions of terms used in the Fair Labor Standards Act and statistical tables compiled from a survey of agricultural processing firms comprise this appendix, which is the second volume of a two volume report. Volume I is available as VT 012 247. (BH)

  2. Debris flow hazard mapping, Hobart, Tasmania, Australia

    NASA Astrophysics Data System (ADS)

    Mazengarb, Colin; Rigby, Ted; Stevenson, Michael

    2015-04-01

    Our mapping of the many dolerite-capped mountains in Tasmania indicates that debris flows are a significant geomorphic process operating there. Hobart, the largest city in the State, lies at the foot of one of these mountains, and our work is focussed on identifying areas that are susceptible to these events and estimating hazard in the valley systems where residential developments have been established. Geomorphic mapping with the benefit of recent LiDAR and GIS-enabled stereo-imagery has allowed us to add to and refine a landslide inventory in our study area. In addition, we have recognised a dominant geomorphic model, involving headward gully retreat in colluvial materials during rainstorms, that explains why many past events occurred and where they may occur in future. In this paper we review the landslide inventory, including a large event (~200,000 m3) in 1872 that affected an area then lightly populated but since heavily urbanised. From this inventory we have derived volume-mobility relationships, magnitude-frequency curves and likelihood estimates. Estimating volume has been challenging because the area of depletion of each debris-flow feature is typically difficult to distinguish from the total affected area. Where LiDAR data exist, however, this uncertainty is substantially reduced, and we develop width-length relationships (for the area of depletion) and area-volume relationships to estimate volume for the whole dataset of more than 300 features. The volume-mobility relationship determined is comparable to international studies and, in the absence of reliable eye-witness accounts, suggests that most of the features can be explained as single-event debris flows, without requiring the more complex mechanisms (such as temporary debris dams that subsequently fail) proposed by others previously. Likelihood estimates have also been challenging to derive given that almost all of the events were unwitnessed, some are constrained by aerial photographs only to decade precision, and many predate regional photography (pre-1940s). We have performed runout modelling using 2D hydraulic modelling software (RiverFlow2D with the Mud and Debris module) in order to calibrate our model against real events and gain confidence in the choice of parameters. Runout modelling was undertaken in valley systems with volumes calibrated to existing flood-model likelihoods for each catchment. The hazard outputs from our models must then be translated to the hazard frameworks used in Australia. By linking to flood mapping we aim to demonstrate to emergency managers where existing mitigation measures may be inadequate and how they can be adapted to address multiple hazards.
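    A common way to formalize the area-volume relationship mentioned above is a power law fitted in log-log space; the sketch below is a minimal Python illustration with invented numbers, not the paper's fitted coefficients.

        # Fit V = a * A**b from (area of depletion, volume) pairs, then predict.
        import numpy as np

        def fit_area_volume(areas_m2, volumes_m3):
            b, log_a = np.polyfit(np.log10(areas_m2), np.log10(volumes_m3), 1)
            return 10.0 ** log_a, b

        a, b = fit_area_volume(np.array([500.0, 2000.0, 8000.0]),
                               np.array([300.0, 2500.0, 20000.0]))
        v_est = a * 15000.0 ** b   # predicted volume for a 15,000 m^2 source area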

  3. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks.

    PubMed

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-11-01

    Large-volume content dissemination is pursued by a growing number of high-quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., live road surveillance and video-based overtaking assistance. Given the highly dynamic vehicular network topology, beacon-less routing protocols have proven efficient in balancing system performance against control overhead. However, to the authors' best knowledge, routing design for large-volume content has not been well considered in previous work, and it introduces new challenges, e.g., a stronger connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large-volume content delivery in VANETs. Each vehicle makes its forwarding decision based on the message header information and its current state, including speed and position. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms traditional dissemination protocols in providing a low end-to-end delay, and the analytical model's delay estimates match Monte Carlo simulations well.

  5. Post-void residual urinary volume is an independent predictor of biopsy results in men at risk for prostate cancer.

    PubMed

    Cormio, Luigi; Lucarelli, Giuseppe; Netti, Giuseppe Stefano; Stallone, Giovanni; Selvaggio, Oscar; Troiano, Francesco; Di Fino, Giuseppe; Sanguedolce, Francesca; Bufo, Pantaleo; Grandaliano, Giuseppe; Carrieri, Giuseppe

    2015-04-01

    The aim was to determine whether peak flow rate (PFR) and post-void residual urinary volume (PVRUV) predict prostate biopsy outcome. The study population consisted of 1780 patients undergoing a first prostate biopsy. Patients with prostate cancer (PCa) had significantly greater prostate-specific antigen (PSA) levels and PFR but lower prostate volume (PVol) and PVRUV than those without PCa. Receiver operating characteristic curve analysis showed that PVol and PVRUV were the most accurate predictors of biopsy outcome. The addition of PVRUV to the multivariate logistic regression model based on standard clinical parameters (age, PSA, digital rectal examination, PVol) significantly increased the predictive accuracy of the model both in the overall population (79% vs. 77%; p=0.001) and in patients with PSA levels up to 10 ng/ml (74.3% vs. 71.7%; p=0.005). PVRUV appears to be an accurate non-invasive test for predicting biopsy outcome that can be used alone or in combination with PVol in the decision-making process for men potentially facing a prostate biopsy. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  6. Coupled discrete element and finite volume solution of two classical soil mechanics problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Feng; Drumm, Eric; Guiochon, Georges A

    One-dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results are compared with the analytical solutions. The two-phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM, and the solid phase modeled using the DEM. A framework is described for the coupling of two open-source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow-DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
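    The Ergun relationship cited above is a standard semi-empirical expression for fluid-particle momentum exchange in a packed granular bed; a minimal Python sketch (the standard textbook form, not necessarily the exact coupling used in the paper) is:

        # Ergun pressure gradient through a bed of spheres [Pa/m].
        def ergun_pressure_gradient(u, eps, d, mu, rho):
            # u: superficial velocity [m/s], eps: porosity,
            # d: particle diameter [m], mu: viscosity [Pa s], rho: density [kg/m^3]
            viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d ** 2)
            inertial = 1.75 * (1.0 - eps) * rho * u * abs(u) / (eps ** 3 * d)
            return viscous + inertial

        dp_dx = ergun_pressure_gradient(u=0.01, eps=0.4, d=1e-3, mu=1e-3, rho=1000.0)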

  7. Optimisation of colour stability of cured ham during packaging and retail display by a multifactorial design.

    PubMed

    Møller, Jens K S; Jakobsen, Marianne; Weber, Claus J; Martinussen, Torben; Skibsted, Leif H; Bertelsen, Grete

    2003-02-01

    A multifactorial design, including (1) percent residual oxygen, (2) oxygen transmission rate of the packaging film (OTR), (3) product-to-headspace volume ratio, (4) illuminance level and (5) nitrite level during curing, was established to investigate factors affecting light-induced oxidative discoloration of cured ham (packaged in a modified atmosphere of 20% carbon dioxide balanced with nitrogen) during 14 days of chill storage. Univariate statistical analysis found significant effects of all main factors on the redness (tristimulus a-value) of the ham. Subsequently, response surface modelling of the data further proved that the interactions between packaging and storage conditions are important when optimising colour stability. The measured content of oxygen in the headspace was incorporated in the model, and its interaction with the product-to-headspace volume ratio was found to be crucial. Thus, it is not enough to keep the headspace oxygen level low: if the headspace volume is large at the same time, there will still be sufficient oxygen for colour-deteriorating processes to take place.

  8. Structure-from-Motion as a method to quantify erosion volumes and to identify sediment sources in eroding rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; André Remke, Alexander; Gronz, Oliver; Becker, Kerstin; Seeger, Manuel; Ries, Johannes B.

    2014-05-01

    One particular problem in the study of rill erosion is the lack of information about sediment sources. So far, sediment sources can only be identified by observation during the event or the experiment. Furthermore, only large and clearly visible changes are considered, and observations do not allow the quantification of erosion rates. A solution to this problem can be provided by 3D modeling using the Structure-from-Motion (SfM) technique. Digital elevation models (DEMs) from terrestrial and aircraft-based images have been produced for many years; however, traditional photogrammetric analysis techniques require considerable expertise both for imaging and for data processing. The recent development of SfM offers geoscientific applications greatly simplified conditions for creating accurate 3D models from terrestrial and aerial photographs recorded by standard, non-metric cameras. Before and after the rill erosion experiments, coherent and largely overlapping terrestrial photos were acquired. VisualSfM then constructs 3D models by detecting unique features in single images, matching common features across image pairs, and triangulating camera and feature positions from these pairs. The results are point clouds with x-, y- and z-coordinates, which are the basis for the preparation of 3D digital elevation models or volumetric surface models. The before and after models are each in their own arbitrary coordinate system and therefore need to be superimposed and scaled. From the point clouds, surface models are created, and via difference calculations between the before and after models, sediment sources can be detected and erosion volumes quantified. So far, the volume deviations between the 3D models and reference volumes do not exceed 10%, and the noise of the 3D models in the worst dimension (the z-axis) does not exceed 4-5 times the pixel spacing. The results show that VisualSfM is a good, easy-to-apply and economical alternative to other imaging systems such as laser scanning or standard software such as the Leica Photogrammetry Suite.
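    The volume quantification step described above amounts to differencing co-registered before/after surface models; a minimal Python sketch (assuming both DEMs are already scaled and on a common grid) is:

        # Erosion volume from a DEM of difference (before minus after).
        import numpy as np

        def erosion_volume(dem_before, dem_after, cell_size_m):
            dz = dem_before - dem_after           # surface lowering per cell [m]
            eroded = np.where(dz > 0.0, dz, 0.0)  # keep erosion, ignore deposition
            return eroded.sum() * cell_size_m ** 2

        before = np.array([[1.00, 1.02], [1.01, 1.03]])
        after = np.array([[0.98, 1.02], [0.99, 1.03]])
        v = erosion_volume(before, after, cell_size_m=0.005)  # 5 mm cells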

  9. High Case Volumes and Surgical Fellowships are Associated with Improved Outcomes for Bariatric Surgery Patients: A Justification of Current Credentialing Initiatives for Practice and Training

    PubMed Central

    Kohn, Geoffrey P; Galanko, Joseph A; Overby, D Wayne; Farrell, Timothy M

    2010-01-01

    Background Recent years have seen the establishment of bariatric surgery credentialing processes, centers-of-excellence programs and fellowship training positions. The effects of center-of-excellence status and of the presence of training programs have not previously been examined. The objective of this study is to examine the effects of case volume, center-of-excellence status and training programs on early outcomes of bariatric surgery. Study Design Data were obtained from the Nationwide Inpatient Sample from 1998 to 2006. Quantification of patients’ comorbidities was made using the Charlson Index. Using logistic regression modeling, annual case volumes were analyzed for an association with each institution’s center-of-excellence status and training program status. Risk-adjusted outcome measures were calculated for these hospital-level parameters. Results Data from 102,069 bariatric operations were obtained. Adjusting for comorbidities, greater bariatric case volume was associated with improvements in the incidence of total complications (odds ratio [OR] = 0.99937 for each single case increase, p=0.01), in-hospital mortality (OR = 0.99717, p<0.01), and most other complications. Hospitals with a Fellowship Council-affiliated gastrointestinal surgery training program were associated with risk-adjusted improvements in rates of splenectomy (OR = 0.2853, p<0.001) and bacterial pneumonias (OR = 0.65898, p=0.02). Center-of-excellence status, irrespective of the accrediting entity, had minimal independent association with outcome. A surgical residency program had a varying association with outcomes. Conclusions The hypothesized positive volume-outcome relationship of bariatric surgery is shown without arbitrarily categorizing hospitals to case volume groups, by analysis of volume as a continuous variable. Institutions with a dedicated fellowship training program have also been shown, in part, to be associated with improved outcomes. The concept of volume-dependent center-of-excellence programs is supported, though no independent association with the credentialing process is noted. PMID:20510799

  10. Cost-effective approach to ethanol production and optimization by response surface methodology.

    PubMed

    Uncu, Oya Nihan; Cekmecelioglu, Deniz

    2011-04-01

    Food wastes disposed of from residential and industrial kitchens have gained attention as a substrate in microbial fermentations to reduce product costs. In this study, the potential of simultaneously hydrolyzing and subsequently fermenting the mixed carbohydrate components of kitchen wastes was assessed, and the effects of solid load, inoculum volume of baker's yeast, and fermentation time on ethanol production were evaluated by response surface methodology (RSM). The enzymatic hydrolysis process was complete within 6 h. Fermentation experiments were conducted at pH 4.5 and a temperature of 30°C, agitated at 150 rpm, without adding the traditional fermentation nutrients. The statistical analysis of the model developed by RSM suggested that the linear effects of solid load, inoculum volume, and fermentation time and the quadratic effects of inoculum volume and fermentation time were significant (P<0.05). The verification experiments indicated that the developed model could be successfully used to predict ethanol concentration at >90% accuracy. An optimum ethanol concentration of 32.2 g/l, giving a yield of 0.40 g/g, comparable to yields reported to date, was suggested by the model with 20% solid load, 8.9% inoculum volume, and 58.8 h of fermentation. The results indicated that production costs can be lowered to a large extent by using kitchen wastes with multiple carbohydrate components and by eliminating the traditional fermentation nutrients from the recipe. Copyright © 2010 Elsevier Ltd. All rights reserved.
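    The RSM model referred to above is typically a second-order polynomial in the design factors; the following Python sketch fits such a surface to synthetic data (the published coefficients are not reproduced here):

        # Least-squares fit of a quadratic response surface in three factors.
        import numpy as np

        def quadratic_design_matrix(X):
            x1, x2, x3 = X.T  # solid load, inoculum volume, fermentation time
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1**2, x2**2, x3**2,
                                    x1*x2, x1*x3, x2*x3])

        rng = np.random.default_rng(1)
        X = rng.uniform([10.0, 4.0, 24.0], [25.0, 12.0, 72.0], size=(20, 3))
        y = 20.0 + 0.5*X[:, 0] + 1.2*X[:, 1] + 0.1*X[:, 2]  # synthetic response
        beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)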

  11. The significance of the choice of radiobiological (NTCP) models in treatment plan objective functions.

    PubMed

    Miller, J; Fuller, M; Vinod, S; Suchowerska, N; Holloway, L

    2009-06-01

    A clinician's discrimination between radiation therapy treatment plans is traditionally a subjective process, based on experience and existing protocols. A more objective and quantitative approach for distinguishing between treatment plans is to use radiobiological or dosimetric objective functions, based on radiobiological or dosimetric models. The efficacy of these models is not well understood, nor is the correlation between the plan rankings produced by the models and those from the traditional subjective approach. One such radiobiological model is the Normal Tissue Complication Probability (NTCP); dosimetric models or indicators are more accepted in clinical practice. In this study, three radiobiological models, the Lyman NTCP, critical volume NTCP and relative seriality NTCP, and three dosimetric models, mean lung dose (MLD) and the lung volumes irradiated at 10 Gy (V10) and 20 Gy (V20), were used to rank a series of treatment plans using harm to normal (lung) tissue as the objective criterion. None of the models considered in this study showed consistent correlation with the Radiation Oncologists' plan ranking. If radiobiological or dosimetric models are to be used in objective functions for lung treatments, then based on this study it is recommended that the Lyman NTCP model be used, because it provides the most consistency with traditional clinician ranking.
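    For reference, the Lyman model mentioned above is commonly evaluated in its Lyman-Kutcher-Burman form, reducing a dose-volume histogram to a generalized equivalent uniform dose (gEUD) and mapping it through a normal CDF; the Python sketch below uses placeholder parameter values, not those of the study.

        # LKB NTCP from a differential DVH (dose bins, fractional volumes).
        import numpy as np
        from scipy.stats import norm

        def lyman_ntcp(doses_gy, volumes, td50=24.5, m=0.18, n=0.87):
            v = np.asarray(volumes, float)
            v = v / v.sum()
            eud = np.sum(v * np.asarray(doses_gy, float) ** (1.0 / n)) ** n
            return norm.cdf((eud - td50) / (m * td50))

        p = lyman_ntcp([5.0, 15.0, 25.0, 35.0], [0.4, 0.3, 0.2, 0.1])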

  12. Newberry Volcano EGS Demonstration Stimulation Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trenton T. Cladouhos, Matthew Clyne, Maisie Nichols,; Susan Petty, William L. Osborn, Laura Nofziger

    2011-10-23

    As a part of Phase I of the Newberry Volcano EGS Demonstration project, several data sets were collected to characterize the rock volume around the well. Fracture, fault, stress, and seismicity data have been collected by borehole televiewer, LiDAR elevation maps, and microseismic monitoring. Well logs and cuttings from the target well (NWG 55-29) and core from a nearby core hole (USGS N-2) have been analyzed to develop geothermal, geochemical, mineralogical and strength models of the rock matrix, altered zones, and fracture fillings (see Osborn et al., this volume). These characterization data sets provide inputs to models used to plan and predict EGS reservoir creation and productivity. One model used is AltaStim, a stochastic fracture and flow software model developed by AltaRock. The software's purpose is to model and visualize EGS stimulation scenarios and provide guidance for final planning. The process of creating an AltaStim model requires synthesis of geologic observations at the well, the modeled stress conditions, and the stimulation plan. Any geomechanical model of an EGS stimulation will require many assumptions and unknowns; thus, the model developed here should not be considered a definitive prediction, but a plausible outcome given reasonable assumptions. AltaStim is a tool for understanding the effect of known constraints, assumptions, and conceptual models on plausible outcomes.

  13. Concerted and mosaic evolution of functional modules in songbird brains

    PubMed Central

    DeVoogd, Timothy J.

    2017-01-01

    Vertebrate brains differ in overall size, composition and functional capacities, but the evolutionary processes linking these traits are unclear. Two leading models offer opposing views: the concerted model ascribes major dimensions of covariation in brain structures to developmental events, whereas the mosaic model relates divergent structures to functional capabilities. The models are often cast as incompatible, but they must be unified to explain how adaptive changes in brain structure arise from pre-existing architectures and developmental mechanisms. Here we show that variation in the sizes of discrete neural systems in songbirds, a species-rich group exhibiting diverse behavioural and ecological specializations, supports major elements of both models. In accordance with the concerted model, most variation in nucleus volumes is shared across functional domains and allometry is related to developmental sequence. Per the mosaic model, residual variation in nucleus volumes is correlated within functional systems and predicts specific behavioural capabilities. These comparisons indicate that oscine brains evolved primarily as a coordinated whole but also experienced significant, independent modifications to dedicated systems from specific selection pressures. Finally, patterns of covariation between species and brain areas hint at underlying developmental mechanisms. PMID:28490627

  14. Optimum systems design with random input and output applied to solar water heating

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, L. L.

    1980-03-01

    Solar water heating systems are evaluated. Models were developed to estimate the percentage of energy supplied by the Sun to a household. Since solar water heating systems have random input and output, queueing theory and birth-and-death processes were the major tools in developing the evaluation models. Microeconomic methods help in determining the optimum size of the solar water heating system design parameters, i.e., the water tank volume and the collector area.
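    A minimal illustration of the birth-and-death machinery referred to above: the stationary distribution of a finite birth-death chain follows from detailed balance. The rates below are invented for illustration, not taken from the paper.

        # Stationary distribution of a finite birth-death process.
        import numpy as np

        def birth_death_steady_state(birth, death):
            # birth[i]: rate i -> i+1; death[i]: rate i+1 -> i
            pi = np.ones(len(birth) + 1)
            for i in range(len(birth)):
                pi[i + 1] = pi[i] * birth[i] / death[i]
            return pi / pi.sum()

        pi = birth_death_steady_state(birth=[0.6, 0.4, 0.2], death=[0.5, 0.5, 0.5])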

  15. A combined Eulerian-volume of fraction-Lagrangian method for atomization simulation

    NASA Technical Reports Server (NTRS)

    Seung, S. P.; Chen, C. P.; Ziebarth, John P.

    1994-01-01

    The tracking of free surfaces between liquid and gas phases, and the analysis of the interfacial phenomena between the two during the atomization and breakup of a liquid fuel jet, are modeled. Numerical modeling of liquid-jet atomization requires the resolution of different conservation equations. Detailed formulation and validation are presented for the confined dam-break problem, the water surface problem, the single droplet problem, a jet breakup problem, and the liquid column instability problem.

  16. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  17. Documentation of the GLAS fourth order general circulation model. Volume 1: Model documentation

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, J.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

    Volume 1 of a three-volume technical memorandum documenting the GLAS Fourth-Order General Circulation Model is presented. Volume 1 contains the documentation, a description of the stratospheric/tropospheric extension, a user's guide, climatological boundary data, and some climate simulation studies.

  18. Large-amplitude jumps and non-Gaussian dynamics in highly concentrated hard sphere fluids.

    PubMed

    Saltzman, Erica J; Schweizer, Kenneth S

    2008-05-01

    Our microscopic stochastic nonlinear Langevin equation theory of activated dynamics has been employed to study the real-space van Hove function of dense hard sphere fluids and suspensions. At very short times, the van Hove function is a narrow Gaussian. At sufficiently high volume fractions, such that the entropic barrier to relaxation is greater than the thermal energy, its functional form evolves with time to include a rapidly decaying component at small displacements and a long-range exponential tail. The "jump" or decay length scale associated with the tail increases with time (or particle root-mean-square displacement) at fixed volume fraction, and with volume fraction at the mean alpha relaxation time. The jump length at the alpha relaxation time is predicted to be proportional to a measure of the decoupling of self-diffusion and structural relaxation. At long times corresponding to mean displacements of order a particle diameter, the volume fraction dependence of the decay length disappears. A good superposition of the exponential tail feature based on the jump length as a scaling variable is predicted at high volume fractions. Overall, the theoretical results are in good accord with recent simulations and experiments. The basic aspects of the theory are also compared with a classic jump model and a dynamically facilitated continuous time random-walk model. Decoupling of the time scales of different parts of the relaxation process predicted by the theory is qualitatively similar to facilitated dynamics models based on the concept of persistence and exchange times if the elementary event is assumed to be associated with transport on a length scale significantly smaller than the particle size.

  19. Critical Void Volume Fraction fc at Void Coalescence for S235JR Steel at Low Initial Stress Triaxiality

    NASA Astrophysics Data System (ADS)

    Grzegorz Kossakowski, Paweł; Wciślik, Wiktor

    2017-10-01

    The paper is concerned with the nucleation, growth and coalescence of microdefects in the form of voids in S235JR steel. The material is one of the basic steel grades commonly used in the construction industry. The theory and methods of damage mechanics were applied to determine and describe the failure mechanisms that occur when the material undergoes deformation. Engineers have generally employed the Gurson-Tvergaard-Needleman model; this material model, based on damage mechanics, is well suited to defining and analyzing the failure processes taking place in the microstructure of S235JR steel. It is particularly important to determine the critical void volume fraction fc, which is one of the basic parameters of the Gurson-Tvergaard-Needleman material model. As the critical void volume fraction fc refers to the failure stage, it is determined from data collected for the void coalescence phase. A case of multi-axial stresses is considered, taking into account the effects of the spatial stress state. In this study, the stress triaxiality parameter η was used to describe the failure phenomena. Cylindrical tensile specimens with a circumferential notch were analysed to obtain low values of initial stress triaxiality (η = 0.556, at the low end of the range considered) in order to determine the critical void volume fraction fc. It should be emphasized that the method applied here differs from the other, more common methods involving parameter calibration, i.e. curve-fitting methods. The critical void volume fraction fc at void coalescence was established through digital image analysis of surfaces of S235JR steel, which involved studying real, physical results obtained directly from the material tested.

  20. The BREAST-V: a unifying predictive formula for volume assessment in small, medium, and large breasts.

    PubMed

    Longo, Benedetto; Farcomeni, Alessio; Ferri, Germano; Campanale, Antonella; Sorotos, Micheal; Santanelli, Fabio

    2013-07-01

    Breast volume assessment enhances preoperative planning of both aesthetic and reconstructive procedures, helping the surgeon in the decision-making process of shaping the breast. Numerous methods of breast size determination are currently reported but are limited by methodologic flaws and variable estimations. The authors aimed to develop a unifying predictive formula for volume assessment in small to large breasts based on anthropomorphic values. Ten anthropomorphic breast measurements and direct volumes of 108 mastectomy specimens from 88 women were collected prospectively. The authors performed a multivariate regression to build the optimal model for development of the predictive formula, and the final model was then internally validated. A previously published formula was used as a reference. Mean (±SD) breast weight was 527.9 ± 227.6 g (range, 150 to 1250 g). After model selection, the sternal notch-to-nipple, inframammary fold-to-nipple, and inframammary fold-to-fold projection distances emerged as the most important predictors. The resulting formula (the BREAST-V) showed an adjusted R² of 0.73. The estimated expected absolute error on new breasts is 89.7 g (95 percent CI, 62.4 to 119.1 g) and the expected relative error is 18.4 percent (95 percent CI, 12.9 to 24.3 percent). Application of the reference formula to the sample yielded worse predictions than those derived from the BREAST-V, with an R² of 0.55. The BREAST-V is a reliable tool for accurately predicting small to large breast volumes, for use as a complementary device in the surgeon's evaluation. An app entitled BREAST-V, for both iOS and Android devices, is currently available for free download in the Apple App Store and Google Play Store. Diagnostic, II.

  1. Intereruptive deformation at Three Sisters volcano, Oregon, USA: a strategy for tracking volume changes through coupled hydraulic-viscoelastic modeling

    NASA Astrophysics Data System (ADS)

    Charco, M.; Rodriguez Molina, S.; Gonzalez, P. J.; Negredo, A. M.; Poland, M. P.; Schmidt, D. A.

    2017-12-01

    The Three Sisters volcanic region, Oregon (USA), is one of the most active volcanic areas in the Cascade Range and is densely populated with eruptive vents. An extensive area just west of South Sister volcano has been actively uplifting since about 1998. InSAR data from 1992 through 2001 showed an uplift rate in the area of 3-4 cm/yr; the deformation rate then decreased considerably between 2004 and 2006, as shown by both InSAR and continuous GPS measurements. Once the magmatic system geometry and location are determined, a linear inversion of all available GPS and InSAR data is performed in order to estimate the volume changes of the source over the analyzed time interval. To do so, we applied a technique based on the Truncated Singular Value Decomposition (TSVD) of the Green's function matrix representing the linear inversion. Here, we develop a strategy to choose the truncation cut-off, removing the smallest singular values without losing too much data resolution while preserving the stability of the method. Furthermore, the strategy gives us a quantification of the uncertainty of the volume-change time series. The strength of the methodology resides in allowing the joint inversion of InSAR measurements from multiple tracks with different look angles and three-component GPS measurements from multiple sites. Finally, we analyze the temporal behavior of the source volume changes using a new analytical model that describes the process of injecting magma into a reservoir surrounded by a viscoelastic shell. This dynamic model is based on Hagen-Poiseuille flow through a vertical conduit that leads to an increase in pressure within a spherical reservoir and time-dependent surface deformation. The volume time series are compared to predictions from the dynamic model to constrain model parameters, namely the characteristic Poiseuille and Maxwell time scales, inlet and outlet injection pressure, and source and shell geometries. The modeling approach used here could be used to develop a mathematically rigorous strategy for including time-series deformation data in the interpretation of volcanic unrest.
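    The TSVD step described above can be sketched in a few lines of Python: the Green's function matrix is decomposed, singular values beyond a cut-off k are discarded, and the resulting pseudo-inverse is applied to the data vector. The matrix and data here are random stand-ins, not the paper's geodetic inputs.

        # Truncated-SVD solution of d = G m, keeping the k largest singular values.
        import numpy as np

        def tsvd_solve(G, d, k):
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
            return Vt.T @ (s_inv * (U.T @ d))

        rng = np.random.default_rng(2)
        G = rng.normal(size=(30, 6))          # stand-in Green's function matrix
        m_true = np.array([1.0, 0.5, 0.0, -0.2, 0.1, 0.0])
        d = G @ m_true + 0.01 * rng.normal(size=30)
        m_est = tsvd_solve(G, d, k=4)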

  2. The capability of radial basis function to forecast the volume fractions of the annular three-phase flow of gas-oil-water.

    PubMed

    Roshani, G H; Karami, A; Salehizadeh, A; Nazemi, E

    2017-11-01

    The problem of how to precisely measure the volume fractions of oil-gas-water mixtures in a pipeline remains one of the main challenges in the petroleum industry. This paper reports the capability of a Radial Basis Function (RBF) network in forecasting the volume fractions in a gas-oil-water multiphase system. In the present research, the volume fractions in the annular three-phase flow are measured based on a dual-energy metering system including 152Eu and 137Cs sources and one NaI detector, and then modeled by an RBF model. Since the volume fractions sum to a constant (100%), it is enough for the RBF model to forecast only two of them. In this investigation, three RBF models are employed: the first forecasts the oil and water volume fractions, the second the water and gas volume fractions, and the third the gas and oil volume fractions. In the next stage, numerical data obtained from the MCNP-X code are introduced to the RBF models, and the average errors of the three models are calculated and compared. The model with the least error is selected as the best predictive model. Based on the results, the best RBF model forecasts the oil and water volume fractions with a mean relative error of less than 0.5%, which indicates that the RBF model introduced in this study provides an effective mechanism for forecasting the results. Copyright © 2017 Elsevier Ltd. All rights reserved.
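    As a hedged sketch of the RBF regression idea (not the authors' trained network), the Python snippet below maps two synthetic detector features to two volume fractions and recovers the third from the closure constraint:

        # RBF regression with closure of the three volume fractions to 100%.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(4)
        X = rng.uniform(0.0, 1.0, size=(100, 2))              # synthetic detector features
        y = np.column_stack([60.0 * X[:, 0], 30.0 * X[:, 1]]) # oil%, water% (synthetic)

        model = RBFInterpolator(X, y, kernel="gaussian", epsilon=1.0)
        oil, water = model(np.array([[0.5, 0.5]]))[0]
        gas = 100.0 - oil - water                             # closure constraint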

  3. A numerical model to simulate foams during devolatilization of polymers

    NASA Astrophysics Data System (ADS)

    Khan, Irfan; Dixit, Ravindra

    2014-11-01

    Customers often demand that the polymers sold in the market have low levels of volatile organic compounds (VOCs). Some of the processes for making polymers involve the removal of volatiles to levels of parts per million (devolatilization). During this step the volatiles are phase-separated out of the polymer through a combination of heating and reduced pressure, creating a foam with the pure polymer in the liquid phase and the volatiles in the gas phase. The efficiency of the devolatilization process depends on accurately predicting the onset of solvent phase change in the polymer-volatiles mixture based on the processing conditions; however, due to the complex relationship between the polymer properties and the processing conditions, this is not trivial. In this work, a bubble-scale model is coupled with a bulk-scale transport model to simulate the processing conditions of polymer devolatilization. The bubble-scale model simulates nucleation and bubble growth based on classical nucleation theory and the popular "influence volume approach." As such, it provides the bubble size distribution and number density inside the polymer at any given time and position. This information is used to predict the bulk properties of the polymer and its behavior under the applied processing conditions. Initial results of this modeling approach will be presented.

  4. Modeling of Thermal Conductivity of CVI-Densified Composites at Fiber and Bundle Level

    PubMed Central

    Guan, Kang; Wu, Jianqing; Cheng, Laifei

    2016-01-01

    The evolution of the thermal conductivities of unidirectional, 2D-woven and 3D-braided composites during the CVI (chemical vapor infiltration) process has been numerically studied by the finite element method. The results show that dual-scale pores play an important role in the thermal conduction of CVI-densified composites. Based on these results, two thermal conductivity models applicable to the CVI process have been developed. The sensitivity analysis demonstrates that the parameter with the most influence on the CVI-densified composites' thermal conductivity is the matrix crack density, followed by the volume fraction of the bundle and the thermal conductance of the matrix cracks, and finally by the micro-porosity inside the bundles and the macro-porosity between the bundles. The obtained results are well consistent with the reported data; thus our models could be useful for designing the processing and performance of CVI-densified composites. PMID:28774130

  5. Analysis of exhaled breath by laser detection

    NASA Astrophysics Data System (ADS)

    Thrall, Karla D.; Toth, James J.; Sharpe, Steven W.

    1996-04-01

    The goal of our work is twofold: (1) to develop a portable, rapid, laser-based breath analyzer for monitoring metabolic processes, and (2) to predict these metabolic processes through physiologically based pharmacokinetic (PBPK) modeling. Small infrared-active molecules such as ammonia, carbon monoxide, carbon dioxide, methane and ethane are present in exhaled breath and can be readily detected by laser absorption spectroscopy. In addition, many of the stable isotopomers of these molecules can be accurately detected, making it possible to follow specific metabolic processes. Potential areas of application for this technology include the diagnosis of certain pathologies (e.g., Helicobacter pylori infection), detection of trauma due to either physical or chemical causes, and monitoring nutrient uptake (i.e., malnutrition). In order to understand the origin of these small molecules and elucidate the metabolic processes associated with them, we are employing physiologically based pharmacokinetic (PBPK) models. A PBPK model is founded on known physiological processes (i.e., blood flow rates, tissue volumes, breathing rate, etc.), chemical-specific processes (i.e., tissue solubility coefficients, molecular weight, chemical density, etc.), and metabolic processes (tissue site and rate of metabolic biotransformation). Since many of these processes are well understood, a PBPK model can be developed and validated against the more readily available experimental animal data; by then extrapolating the parameters to man, the model can predict chemical behavior in humans.
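    A toy illustration of the PBPK idea (a single well-mixed compartment with first-order metabolic clearance; every parameter below is invented) can be written as:

        # One-compartment uptake/clearance model solved as an ODE.
        import numpy as np
        from scipy.integrate import solve_ivp

        def dcdt(t, c, dose_rate=1.0, v_blood=5.0, k_met=0.3):
            # c[0]: blood concentration; dose_rate: uptake [mg/h];
            # v_blood: blood volume [L]; k_met: metabolic rate constant [1/h]
            return [(dose_rate / v_blood) - k_met * c[0]]

        sol = solve_ivp(dcdt, (0.0, 24.0), [0.0], t_eval=np.linspace(0.0, 24.0, 49))
        exhaled = 0.01 * sol.y[0]   # assumed blood-to-breath partition factor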

  6. A study of process parameters on workpiece anisotropy in the laser engineered net shaping (LENS™) process

    NASA Astrophysics Data System (ADS)

    Chandra, Shubham; Rao, Balkrishna C.

    2017-06-01

    The process of laser engineered net shaping (LENS™) is an additive manufacturing technique that employs the coaxial flow of metallic powders with a high-power laser to form a melt pool and subsequently deposit the specimen on a substrate. Although research done over the past decade on the LENS™ processing of alloys of steel, titanium, nickel and other metallic materials typically reports superior mechanical properties in as-deposited specimens compared to the bulk material, there is anisotropy in the mechanical properties of the melt deposit. The current study develops a numerical model of the LENS™ process using the principles of computational fluid dynamics (CFD) and predicts the volume fraction of equiaxed grains in order to identify process parameters for depositing workpieces with isotropic properties. The numerical simulation is carried out in ANSYS-Fluent, whose thermal-gradient data are used to determine the volume fraction of equiaxed grains present in the deposited specimen. This study has been validated against earlier experimental work on LENS™ processing of nickel alloys. Besides being applicable to the wider family of metals and alloys, the results of this study will also facilitate effective process design to improve both product quality and productivity.

  7. The effect of process parameters on audible acoustic emissions from high-shear granulation.

    PubMed

    Hansuld, Erin M; Briens, Lauren; Sayani, Amyn; McCann, Joe A B

    2013-02-01

    Product quality in high-shear granulation is easily compromised by minor changes in raw material properties or process conditions. It is desirable to develop a process analytical technology (PAT) that can monitor the process in real time and provide feedback for quality control. In this work, the application of audible acoustic emissions (AAEs) as a PAT tool was investigated. A condenser microphone was placed at the top of the air exhaust on a PMA-10 high-shear granulator to collect AAEs for a design of experiments (DOE) varying impeller speed, total binder volume and spray rate. The results showed that the 10 Hz total power spectral densities (TPSDs) between 20 and 250 Hz were significantly affected by the changes in process conditions. Impeller speed and spray rate were shown to have statistically significant effects on granulation wetting, and impeller speed and total binder volume were significant in terms of the process end-point. The DOE results were confirmed by a multivariate PLS model of the TPSDs; the scores plot showed separation based on impeller speed in the first component and spray rate in the second component. The findings support the use of AAEs to monitor changes in process conditions in real time and achieve consistent product quality.

  8. Overpressure generation by load transfer following shale framework weakening due to smectite diagenesis

    USGS Publications Warehouse

    Lahann, R.W.; Swarbrick, R.E.

    2011-01-01

    Basin model studies which have addressed the importance of smectite conversion to illite as a source of overpressure in the Gulf of Mexico have principally relied on a single-shale compaction model and treated the smectite reaction as only a fluid-source term. Recent fluid pressure interpretation and shale petrology studies indicate that conversion of bound water to mobile water, dissolution of load-bearing grains, and increased preferred orientation change the compaction properties of the shale. This results in substantial changes in effective stress and fluid pressure; the resulting fluid pressure can be 1500-3000 psi higher than pressures interpreted from models based on shallow compaction trends. Shale diagenesis changes the mineralogy, volume, and orientation of the load-bearing grains in the shale as well as the volume of bound water. This process creates a weaker (more compactable) grain framework. When these changes occur without fluid export from the shale, some of the stress is transferred from the grains onto the fluid. Observed relationships between shale density and calculated effective stress in Gulf of Mexico shelf wells confirm these changes in shale properties with depth. Further, the density-effective stress changes cannot be explained by fluid-expansion or fluid-source processes or by prediagenesis compaction, but are consistent with a dynamic diagenetic modification of the shale mineralogy, texture, and compaction properties during burial. These findings support the incorporation of diagenetic modification of compaction properties as part of the fluid pressure interpretation process. © 2011 Blackwell Publishing Ltd.

  9. Two-step infiltration of aluminum melts into Al-Ti-B4C-CuO powder mixture pellets

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Lee, Jung-Moo; Cho, Young-Hee; Kim, Su-Hyeon; Yu, Huashun

    2016-03-01

    Aluminum matrix composites with a high volume fraction of B4C and TiB2 were fabricated by a novel processing technique: a quick spontaneous infiltration process. The process combines pressureless infiltration with the combustion reaction of Al-Ti-B4C-CuO in molten aluminum, and it is realized in a simple and economical way, with the whole process performed in air within a few minutes. To verify the rapidity of the process, the infiltration kinetics was calculated based on the Washburn equation, in which melt flows into a porous skeleton. However, the experimental results deviated noticeably from the calculated ones. Considering the cross-sections of the samples at different processing times, a new infiltration model (two-step infiltration) consisting of macro-infiltration and micro-infiltration is suggested. The kinetics calculated in light of the proposed model agrees well with the experimental results.
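    For reference, the Washburn relation used above gives the infiltration depth into a capillary of radius r as L(t) = sqrt(r·γ·cosθ·t / (2μ)); a Python sketch with illustrative values for molten aluminum follows.

        # Washburn infiltration depth [m] after time t [s].
        import math

        def washburn_depth(r_pore, gamma, theta_deg, mu, t):
            # r_pore: capillary radius [m], gamma: surface tension [N/m],
            # theta_deg: contact angle [deg], mu: melt viscosity [Pa s]
            return math.sqrt(r_pore * gamma * math.cos(math.radians(theta_deg)) * t
                             / (2.0 * mu))

        depth = washburn_depth(r_pore=5e-6, gamma=0.9, theta_deg=45.0,
                               mu=1.3e-3, t=60.0)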

  10. Comparison of spectroscopy technologies for improved monitoring of cell culture processes in miniature bioreactors

    PubMed Central

    van den Berg, Frans; Racher, Andrew J.; Martin, Elaine B.; Jaques, Colin

    2017-01-01

    Cell culture process development requires the screening of large numbers of cell lines and process conditions. The development of miniature bioreactor systems has increased the throughput of such studies; however, there are limitations with their use. One important constraint is the limited number of offline samples that can be taken compared to those taken for monitoring cultures in large‐scale bioreactors. The small volume of miniature bioreactor cultures (15 mL) is incompatible with the large sample volume (600 µL) required for bioanalysers routinely used. Spectroscopy technologies may be used to resolve this limitation. The purpose of this study was to compare the use of NIR, Raman, and 2D‐fluorescence to measure multiple analytes simultaneously in volumes suitable for daily monitoring of a miniature bioreactor system. A novel design‐of‐experiment approach is described that utilizes previously analyzed cell culture supernatant to assess metabolite concentrations under various conditions while providing optimal coverage of the desired design space. Multivariate data analysis techniques were used to develop predictive models. Model performance was compared to determine which technology is more suitable for this application. 2D‐fluorescence could more accurately measure ammonium concentration (RMSECV 0.031 g L−1) than Raman and NIR. Raman spectroscopy, however, was more robust at measuring lactate and glucose concentrations (RMSECV 1.11 and 0.92 g L−1, respectively) than the other two techniques. The findings suggest that Raman spectroscopy is more suited for this application than NIR and 2D‐fluorescence. The implementation of Raman spectroscopy increases at‐line measuring capabilities, enabling daily monitoring of key cell culture components within miniature bioreactor cultures. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:337–346, 2017 PMID:28271638

  11. Modelling cascading and erosional processes for glacial lake outburst floods in the Quillcay catchment, Huaraz, Cordillera Blanca, Peru

    NASA Astrophysics Data System (ADS)

    Baer, Patrick; Huggel, Christian; Frey, Holger; Chisolm, Rachel; McKinney, Daene; McArdell, Brian; Portocarrero, Cesar; Cochachin, Alejo

    2016-04-01

    Huaraz, the largest city in the Cordillera Blanca, faced a major disaster in 1941, when an outburst flood from Lake Palcacocha killed several thousand people and caused widespread destruction. Recent studies on glacial lake outburst flood (GLOF) modelling and early warning systems have focussed on Lake Palcacocha, which has regrown after the 1941 event from a volume of half a million m3 in 1974 to more than 17 million m3 today. However, little research has been conducted so far on the other lakes in the Quillcay catchment, namely Lake Tullparaju (12 million m3) and Lake Cuchillacocha (2.5 million m3), which also pose a threat to the city of Huaraz. In this study, we modelled the cascading processes at Lake Tullparaju and Lake Cuchillacocha, including rock/ice avalanches, flood wave propagation in the lake, and the resulting outburst flood and debris flows. We used the 2D model RAMMS to simulate ice avalanches; the model output was used as input for analytical 2D and 3D calculations of impact waves in the lakes, which allowed us to estimate the dam-overtopping wave height. Since the dimensions of the hanging glaciers above all three lakes are comparable, the scenarios in this study were defined similarly to those in the previous study at Lake Palcacocha. The flow propagation model included sediment entrainment in the steeper parts of the catchment, adding up to 50% to the initial flow volume. The results for total travel time, as well as for inundated areas and flow depth and velocity in the city of Huaraz, are comparable to the previous studies at Lake Palcacocha. This underlines the importance of also considering these lakes within an integral hazard analysis for the city of Huaraz. A main challenge for modelling GLOFs in the Quillcay catchment using RAMMS is the long runout distance of over 22 km combined with the very low slope gradient of the river. Further studies could improve the process understanding and could focus on more detailed investigations of the stability of the steep glaciers and rock faces, or on incorporating bathymetric and geotechnical dam information with the application of 3D wave-generation simulation. First applications of the beta version of the RAMMS erosion model for GLOF sediment entrainment are promising and could be refined with additional field studies.

  12. Biomimetic materials in the utility industry: A program plan for research opportunities, volume 2. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richman, R.H.; McNaughton, W.P.

    1996-09-01

    This report is the second of a two-volume set addressing the state of the art and outlook for the application of biomimetic materials. The first volume examined achievements in mimicking novel aspects of biological systems in five broad categories: (1) Mimicking of Natural Material Designs, (2) Biomimetic Materials Processing, (3) Artificial Photosynthesis, (4) Biomimetic Molecular Electronics, and (5) Biomimetic Catalysis. Each topic was examined as to current activities and approaches, key aspects, unresolved issues, and implications for the power industry. Key researchers, their organizations, the main thrusts of investigation, achievements, and funding agencies were also summarized. This volume highlights opportunities for future research activities in biomimetics that could be valuable to the U.S. utility industry. Nineteen specific research projects have been identified. These opportunities are outlined in four classes: (1) technology awareness, (2) modeling and experimental studies, (3) state-of-the-art and outlook studies: developing experimental plans, and (4) concept feasibility studies.

  13. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims at validating the left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echocardiographic data (RT3DE) and magnetic resonance imaging (MRI) data. A validation protocol was defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming the MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured by the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable to human hearts in clinical practice.

  14. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. The impact of the monotonicity constraint is discussed.

  15. Assessment of the hybrid propagation model, Volume 2: Comparison with the Integrated Noise Model

    DOT National Transportation Integrated Search

    2012-08-31

    This is the second of two volumes of the report on the Hybrid Propagation Model (HPM), an advanced prediction model for aviation noise propagation. This volume presents comparisons of the HPM and the Integrated Noise Model (INM) for conditions of une...

  16. Root induced changes of effective 1D hydraulic properties in a soil column.

    PubMed

    Scholl, P; Leitner, D; Kammerer, G; Loiskandl, W; Kaul, H-P; Bodner, G

    Roots are essential drivers of soil structure and pore formation. This study aimed at quantifying root-induced changes of the pore size distribution (PSD), with a focus on the extent of clogging vs. formation of pores during active root growth. Parameters of Kosugi's lognormal PSD model were determined by inverse estimation in a column experiment with two cover crops (mustard, rye) and an unplanted control. Pore dynamics were described using a convection-dispersion-like pore evolution model. Rooted treatments showed a wider range of pore radii, with increasing volumes of large macropores (>500 μm) and micropores (<2.5 μm), while fine macropores, mesopores and larger micropores decreased. The non-rooted control showed a narrowing of the PSD and reduced porosity over all radius classes. The pore evolution model accurately described root-induced changes, while structure degradation in the non-rooted control was not captured properly. Our study demonstrated significant short-term root effects, with heterogenization of the pore system as the dominant process of root-induced structure formation. Pore clogging is suggested as a partial cause of reduced pore volume. The important changes in micro- and large macropores, however, indicate that multiple mechanical and biochemical processes are involved in root-pore interactions.
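    Kosugi's model assumes a lognormal pore-radius distribution, so the cumulative pore-volume fraction below a radius r follows a normal CDF in ln r. The Python sketch below uses illustrative parameter values, not the study's fitted ones.

        # Kosugi lognormal PSD: cumulative pore-volume fraction for radius <= r.
        import math

        def kosugi_cdf(r, r_median=10e-6, sigma=1.5):
            # r_median: median pore radius [m]; sigma: std of ln(r)
            return 0.5 * math.erfc(-math.log(r / r_median) / (sigma * math.sqrt(2.0)))

        macropore_fraction = 1.0 - kosugi_cdf(500e-6)  # share of pores > 500 um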

  17. Numerical Analysis of a Paraffin/Metal Foam Composite for Thermal Storage

    NASA Astrophysics Data System (ADS)

    Di Giorgio, P.; Iasiello, M.; Viglione, A.; Mameli, M.; Filippeschi, S.; Di Marco, P.; Andreozzi, A.; Bianco, N.

    2017-01-01

    In the last decade, the use of Phase Change Materials (PCMs) for passive thermal energy storage has been widely studied both analytically and experimentally. Among the PCMs, paraffins offer many advantages: they have a high latent heat and a low vapour pressure, and they are chemically inert, stable and non-toxic. However, their thermal conductivity is very low and they undergo a large volume change during melting. An efficient way to improve their poor thermal conductivity is to couple them with open-cell metallic foams. This paper presents a theoretical analysis of the paraffin melting process inside an aluminum foam. A mathematical model is developed using the volume-averaged governing equations for the porous domain, made up of the PCM embedded in the metal foam. Non-Darcian and buoyancy effects are considered in the momentum equation, while the energy equations are modelled with the Local Thermal Non-Equilibrium (LTNE) approach. The PCM liquefaction is treated with the apparent heat capacity method, and the governing equations are solved with a finite-element scheme in COMSOL Multiphysics®. A new method to calculate the coupling coefficients needed for the thermal model has been developed, and the results have been validated against experimental data from the literature.
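
    The apparent heat capacity method mentioned above folds the latent heat into an effective specific heat over a small melting interval. Below is a minimal sketch assuming a rectangular latent-heat distribution; the function name and parameter values are illustrative, not the paper's formulation.

    ```python
    import numpy as np

    def apparent_heat_capacity(T, cp_solid, cp_liquid, latent_heat, T_melt, half_width):
        """Effective specific heat c_app(T): sensible heat blended across the
        mushy zone plus the latent heat spread uniformly over
        [T_melt - half_width, T_melt + half_width]."""
        liquid_frac = np.clip((T - (T_melt - half_width)) / (2.0 * half_width), 0.0, 1.0)
        sensible = (1.0 - liquid_frac) * cp_solid + liquid_frac * cp_liquid
        latent = np.where(np.abs(T - T_melt) <= half_width,
                          latent_heat / (2.0 * half_width), 0.0)
        return sensible + latent

    # Illustrative paraffin-like values: cp in J/(kg K), latent heat in J/kg
    T = np.linspace(300.0, 340.0, 9)
    print(apparent_heat_capacity(T, 2000.0, 2200.0, 2.0e5, 320.0, 2.0))
    ```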

  18. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans

    PubMed Central

    Schreiber, Darren; Fonzo, Greg; Simmons, Alan N.; Dawes, Christopher T.; Flagan, Taru; Fowler, James H.; Paulus, Martin P.

    2013-01-01

    Liberals and conservatives exhibit different cognitive styles, and converging lines of evidence suggest that biology influences differences in their political attitudes and beliefs. In particular, a recent study of young adults suggests that liberals and conservatives have significantly different brain structure, with liberals showing increased gray matter volume in the anterior cingulate cortex and conservatives showing increased gray matter volume in the amygdala. Here, we explore differences in brain function in liberals and conservatives by matching publicly available voter records to 82 subjects who performed a risk-taking task during functional imaging. Although the risk-taking behavior of Democrats (liberals) and Republicans (conservatives) did not differ, their brain activity did. Democrats showed significantly greater activity in the left insula, while Republicans showed significantly greater activity in the right amygdala. In fact, a two-parameter model of partisanship based on amygdala and insula activations yields a better-fitting model of partisanship than a well-established model based on parental socialization of party identification, long thought to be one of the core findings of political science. These results suggest that liberals and conservatives engage different cognitive processes when they think about risk, and they support recent evidence that conservatives show greater sensitivity to threatening stimuli. PMID:23418419
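
    The "two-parameter model of partisanship" is, in effect, a classifier on two regional activations. The sketch below fits a logistic regression on synthetic amygdala and insula activations; everything here (data, labels, coefficients) is a placeholder, not the study's result.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 82  # same sample size as the study; the data are synthetic
    amygdala = rng.normal(size=n)
    insula = rng.normal(size=n)
    # Synthetic labels (1 = Republican, 0 = Democrat) built from the assumed signs
    party = (amygdala - insula + rng.normal(size=n) > 0).astype(int)

    X = np.column_stack([amygdala, insula])
    clf = LogisticRegression().fit(X, party)
    print("coefficients (amygdala, insula):", clf.coef_[0])
    print("in-sample accuracy:", clf.score(X, party))
    ```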

  19. Structure of AA5056 after friction drilling

    NASA Astrophysics Data System (ADS)

    Eliseev, A. A.; Kalashnikova, T. A.; Fortuna, S. V.

    2017-12-01

    Here we present data on the structure of AA5056 alloy after friction drilling, with the aim of assessing the potential of the process for use in model experiments on friction stir welding. Our analysis of the average size and volume content of precipitates shows that their content decreases immediately beneath the friction surface and that the structure of this zone is the same as that of the stirring zones formed in friction stir welding. The data suggest that both processes produce similar metal structures.

  20. Theoretical Modeling of Molecular and Electron Kinetic Processes. Volume I. Theoretical Formulation of Analysis and Description of Computer Program.

    DTIC Science & Technology

    1979-01-01

    synthesis proceeds by ignoring unacceptable syntax or other errors, protection against subsequent execution of a faulty reaction scheme can be...resulting TAPE9. During subroutine synthesis and reaction processing, a search is made (for each secondary electron collision encountered) to...program library, which can be catalogued and saved if any future specialized modifications (beyond the scope of the synthesis capability of LASER
