On coarse projective integration for atomic deposition in amorphous systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuang, Claire Y.; Sinno, Talid; Han, Sang M. (E-mail: yungc@seas.upenn.edu; talid@seas.upenn.edu; meister@unm.edu; zepedaruiz1@llnl.gov)
2015-10-07
Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of time scales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity, and computational efficiency. Coarse projective integration, an example application of the “equation-free” framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute time derivatives of slowly evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the “lifting” operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. The approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
On Coarse Projective Integration for Atomic Deposition in Amorphous Systems
Chuang, Claire Y.; Han, Sang M.; Zepeda-Ruiz, Luis A.; ...
2015-10-02
Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of timescales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity and computational efficiency. Coarse projective integration, an example application of the ‘equation-free’ framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute gradients of slowly evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the ‘lifting’ operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. In conclusion, the approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
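To make the restrict-estimate-project loop described in the two records above concrete, the sketch below alternates short bursts of a fine-scale simulator with large extrapolation steps on a coarse variable. It is a minimal illustration under assumed stand-ins, not the authors' code: `fine_step`, the identity `restrict`/`lift` maps, and all parameter values are hypothetical placeholders for the MD deposition model and the island-size measures used in the paper.

```python
import numpy as np

def fine_step(x, dt, rng):
    """Hypothetical fine-scale (stochastic) simulator: one small time step.
    A noisy relaxation toward 1.0 stands in for the MD deposition model."""
    return x + dt * (1.0 - x) + np.sqrt(dt) * 0.01 * rng.standard_normal()

def restrict(x):
    """'Restriction': map the fine state to the coarse variable (identity here)."""
    return x

def lift(u):
    """'Lifting': rebuild a fine-scale state consistent with the coarse variable.
    In the paper this is the hard part (recreating atomistic Ge islands); here it is trivial."""
    return u

def coarse_projective_integration(u0, burst_steps=50, dt=1e-3,
                                  projection_time=0.05, n_cycles=40, seed=0):
    rng = np.random.default_rng(seed)
    u, t = u0, 0.0
    history = [(t, u)]
    for _ in range(n_cycles):
        # 1) lift and run a short burst of the fine-scale simulator
        x = lift(u)
        coarse_samples = []
        for k in range(burst_steps):
            x = fine_step(x, dt, rng)
            coarse_samples.append((t + (k + 1) * dt, restrict(x)))
        # 2) estimate the coarse time derivative from the tail of the burst
        times, values = zip(*coarse_samples[burst_steps // 2:])
        dudt = np.polyfit(times, values, 1)[0]
        # 3) project the coarse variable forward over a large time interval
        t = times[-1] + projection_time
        u = values[-1] + projection_time * dudt
        history.append((t, u))
    return history

if __name__ == "__main__":
    for t, u in coarse_projective_integration(0.2)[::10]:
        print(f"t = {t:8.4f}   u = {u:.4f}")
```

The payoff comes from making the projection interval much longer than the simulated burst; the lifting step is trivial here precisely because the coarse variable is the full state, whereas the paper's contribution is making lifting work for realistic island configurations.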
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
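As a point of reference for the extrapolation step described above (a generic textbook form, not taken from the paper itself), the simplest coarse projective integrator is a projective forward Euler rule: after k+1 inner steps of size \(\delta t\) producing coarse values \(U_k\) and \(U_{k+1}\), the state is jumped forward by M further steps,
\[
U_{k+1+M} \;\approx\; U_{k+1} + M\,\delta t\,\frac{U_{k+1}-U_k}{\delta t}
\;=\; U_{k+1} + M\,(U_{k+1}-U_k),
\]
so the computational saving grows with the projection ratio \(M/(k+1)\), which is the ratio of projection time to microscopic execution time that the abstract refers to.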
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, Tristram O.; Le Page, Yannick LB; Huang, Maoyi
2014-06-05
Projections of land cover change generated from Integrated Assessment Models (IAM) and other economic-based models can be applied for analyses of environmental impacts at subregional and landscape scales. For those IAM and economic models that project land use at the sub-continental or regional scale, these projections must be downscaled and spatially distributed prior to use in climate or ecosystem models. Downscaling efforts to date have been conducted at the national extent with relatively high spatial resolution (30m) and at the global extent with relatively coarse spatial resolution (0.5 degree).
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
Modeling disease transmission near eradication: An equation free approach
NASA Astrophysics Data System (ADS)
Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan
2015-01-01
Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics
Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force in the sense that allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization which we derive for general non-linear coarse graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.
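For readers unfamiliar with force matching, the standard variational statement (written here schematically for a linear coarse-graining map M, as a generic textbook form rather than the generalized condition derived in the paper) is
\[
\chi^2[\mathbf{F}] \;=\; \Big\langle \big|\, \mathbf{f}(\mathbf{r}) - \mathbf{F}\big(M(\mathbf{r})\big) \big|^2 \Big\rangle ,
\qquad
\mathbf{F}^{\ast}(\mathbf{R}) \;=\; \mathbb{E}\big[\, \mathbf{f}(\mathbf{r}) \,\big|\, M(\mathbf{r}) = \mathbf{R} \,\big]
\;=\; -\nabla_{\mathbf{R}} W(\mathbf{R}),
\]
i.e., the minimizer of the force-matching residual is the conditional expectation of the atomistic force, which is the local mean force and the gradient of the potential of mean force \(W\); this is the link to thermodynamic integration that the abstract emphasizes.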
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), lose robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlap must be extended, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are directly fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By discarding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
Quantum theory of multiscale coarse-graining.
Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A
2018-03-14
Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
Sleeter, Benjamin M.; Sohl, Terry L.; Bouchard, Michelle A.; Reker, Ryan R.; Soulard, Christopher E.; Acevedo, William; Griffith, Glenn E.; Sleeter, Rachel R.; Auch, Roger F.; Sayler, Kristi L.; Prisley, Stephen; Zhu, Zhi-Liang
2012-01-01
Global environmental change scenarios have typically provided projections of land use and land cover for a relatively small number of regions or using a relatively coarse resolution spatial grid, and for only a few major sectors. The coarseness of global projections, in both spatial and thematic dimensions, often limits their direct utility at scales useful for environmental management. This paper describes methods to downscale projections of land-use and land-cover change from the Intergovernmental Panel on Climate Change's Special Report on Emission Scenarios to ecological regions of the conterminous United States, using an integrated assessment model, land-use histories, and expert knowledge. Downscaled projections span a wide range of future potential conditions across sixteen land use/land cover sectors and 84 ecological regions, and are logically consistent with both historical measurements and SRES characteristics. Results appear to provide a credible solution for connecting regionalized projections of land use and land cover with existing downscaled climate scenarios, under a common set of scenario-based socioeconomic assumptions.
NASA Astrophysics Data System (ADS)
Izvekov, Sergei
2017-03-01
We consider the generalized Langevin equations of motion describing exactly the particle-based coarse-grained dynamics in the classical microscopic ensemble that were derived recently within the Mori-Zwanzig formalism based on new projection operators [S. Izvekov, J. Chem. Phys. 138(13), 134106 (2013)]. The fundamental difference between the new family of projection operators and the standard Zwanzig projection operator used in the past to derive the coarse-grained equations of motion is that the new operators average out the explicit irrelevant trajectories leading to the possibility of solving the projected dynamics exactly. We clarify the definition of the projection operators and revisit the formalism to compute the projected dynamics exactly for the microscopic system in equilibrium. The resulting expression for the projected force is in the form of a "generalized additive fluctuating force" describing the departure of the generalized microscopic force associated with the coarse-grained coordinate from its projection. Starting with this key expression, we formulate a new exact formula for the memory function in terms of microscopic and coarse-grained conservative forces. We conclude by studying two independent limiting cases of practical importance: the Markov limit (vanishing correlations of projected force) and the limit of weak dependence of the memory function on the particle momenta. We present computationally affordable expressions which can be efficiently evaluated from standard molecular dynamics simulations.
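For orientation, the generic structure of a coarse-grained generalized Langevin equation of this type (written schematically, not in the exact form derived in the paper) is
\[
M_I \dot{\mathbf{V}}_I(t) \;=\; \mathbf{F}^{\mathrm{cons}}_I(t)
\;-\; \sum_J \int_0^{t} K_{IJ}(t-s)\, \mathbf{V}_J(s)\, \mathrm{d}s
\;+\; \delta \mathbf{F}^{+}_I(t),
\]
where the memory kernel \(K_{IJ}\) is related to correlations of the projected (fluctuating) force \(\delta \mathbf{F}^{+}\) by a fluctuation-dissipation relation; the Markov limit discussed in the abstract replaces the convolution by an instantaneous friction term \(-\gamma_{IJ}\mathbf{V}_J(t)\).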
Miller, Thomas F.
2017-01-01
We present a coarse-grained simulation model that is capable of simulating the minute-timescale dynamics of protein translocation and membrane integration via the Sec translocon, while retaining sufficient chemical and structural detail to capture many of the sequence-specific interactions that drive these processes. The model includes accurate geometric representations of the ribosome and Sec translocon, obtained directly from experimental structures, and interactions parameterized from nearly 200 μs of residue-based coarse-grained molecular dynamics simulations. A protocol for mapping amino-acid sequences to coarse-grained beads enables the direct simulation of trajectories for the co-translational insertion of arbitrary polypeptide sequences into the Sec translocon. The model reproduces experimentally observed features of membrane protein integration, including the efficiency with which polypeptide domains integrate into the membrane, the variation in integration efficiency upon single amino-acid mutations, and the orientation of transmembrane domains. The central advantage of the model is that it connects sequence-level protein features to biological observables and timescales, enabling direct simulation for the mechanistic analysis of co-translational integration and for the engineering of membrane proteins with enhanced membrane integration efficiency. PMID:28328943
DOT National Transportation Integrated Search
2015-02-01
Material segregation in asphalt mixtures is a non-uniform distribution of coarse and fine aggregates through the mass, i.e., concentration of coarse materials in some areas and fine materials in others. During construction, the coarse and fine...
Coarse-Grained Models for Automated Fragmentation and Parametrization of Molecular Databases.
Fraaije, Johannes G E M; van Male, Jan; Becherer, Paul; Serral Gracià, Rubèn
2016-12-27
We calibrate coarse-grained interaction potentials suitable for screening large data sets in top-down fashion. Three new algorithms are introduced: (i) automated decomposition of molecules into coarse-grained units (fragmentation); (ii) Coarse-Grained Reference Interaction Site Model-Hypernetted Chain (CG RISM-HNC) as an intermediate proxy for dissipative particle dynamics (DPD); and (iii) a simple top-down coarse-grained interaction potential/model based on activity coefficient theories from engineering (using COSMO-RS). We find that the fragment distribution follows Zipf and Heaps scaling laws. The accuracy in Gibbs energy of mixing calculations is a few tenths of a kilocalorie per mole. As a final proof of principle, we use full coarse-grained sampling through DPD thermodynamic integration to calculate log P_OW for 4627 compounds with an average error of 0.84 log unit. The computational speeds per calculation are a few seconds for CG RISM-HNC and a few minutes for DPD thermodynamic integration.
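The thermodynamic-integration step mentioned above evaluates a free-energy difference as \(\Delta G = \int_0^1 \langle \partial U/\partial\lambda\rangle_\lambda\, d\lambda\). The snippet below is a generic quadrature sketch of that integral, not the CG RISM-HNC/DPD pipeline of the paper; the analytic profile in the demo is a made-up stand-in for sampled ensemble averages.

```python
import numpy as np

def thermodynamic_integration(mean_dU_dlambda, n_points=8):
    """Estimate a free-energy difference as the integral of <dU/dlambda> over
    lambda in [0, 1] using Gauss-Legendre quadrature.  `mean_dU_dlambda(lam)`
    must return the ensemble average of dU/dlambda at coupling parameter `lam`
    (here an analytic stand-in for averages that would come from DPD or MD sampling)."""
    nodes, weights = np.polynomial.legendre.leggauss(n_points)
    lams = 0.5 * (nodes + 1.0)            # map quadrature nodes from [-1, 1] to [0, 1]
    return 0.5 * sum(w * mean_dU_dlambda(l) for w, l in zip(weights, lams))

if __name__ == "__main__":
    # Hypothetical smooth <dU/dlambda> profile; the exact integral is 2/3.
    profile = lambda lam: 2.0 * lam**2
    print(f"Delta G estimate: {thermodynamic_integration(profile):.6f}  (exact 2/3)")
```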
A coarse-grid-projection acceleration method for finite-element incompressible flow computations
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne; FiN Lab Team
2015-11-01
Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented.
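A minimal sketch of one coarse-grid-projection pressure step is given below, assuming a periodic grid, restriction by injection, and a spectral Poisson solve; for brevity the prolongation here is piecewise-constant upsampling rather than the linear interpolation used in the work above, so the fine-grid divergence is only reduced, not eliminated. All function names and parameter values are illustrative.

```python
import numpy as np

def divergence(u, v, h):
    """Central-difference divergence on a periodic grid."""
    dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
    dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * h)
    return dudx + dvdy

def gradient(p, h):
    """Central-difference gradient on a periodic grid."""
    dpdx = (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / (2 * h)
    dpdy = (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) / (2 * h)
    return dpdx, dpdy

def poisson_periodic(rhs, h):
    """Spectral solution of lap(p) = rhs on a periodic grid (zero-mean solution)."""
    n = rhs.shape[0]
    k = np.fft.fftfreq(n, d=h) * 2 * np.pi
    kx, ky = np.meshgrid(k, k)
    denom = -(kx**2 + ky**2)
    denom[0, 0] = 1.0                      # avoid division by zero for the mean mode
    p_hat = np.fft.fft2(rhs) / denom
    p_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(p_hat))

def cgp_projection(u_star, v_star, h, dt, coarsen=2):
    """One coarse-grid-projection step: restrict div(u*) to a coarser grid by
    injection, solve the pressure Poisson equation there, prolong the pressure
    back (piecewise-constant Kron upsampling; the paper uses linear interpolation),
    and correct the fine-grid velocity."""
    div_fine = divergence(u_star, v_star, h) / dt
    div_coarse = div_fine[::coarsen, ::coarsen]               # restriction by injection
    p_coarse = poisson_periodic(div_coarse, h * coarsen)      # coarse Poisson solve
    p_fine = np.kron(p_coarse, np.ones((coarsen, coarsen)))   # prolongation
    dpdx, dpdy = gradient(p_fine, h)
    return u_star - dt * dpdx, v_star - dt * dpdy, p_fine

if __name__ == "__main__":
    n, h, dt = 64, 1.0 / 64, 1e-2
    x = np.arange(n) * h
    X, Y = np.meshgrid(x, x)
    u_star = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
    v_star = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
    u, v, _ = cgp_projection(u_star, v_star, h, dt)
    print("max |div| before:", np.abs(divergence(u_star, v_star, h)).max())
    print("max |div| after: ", np.abs(divergence(u, v, h)).max())
```

Running the demo shows the divergence of the corrected field dropping substantially while the Poisson solve is performed on a grid with one quarter of the unknowns, which is the source of the speedup described in the abstract.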
Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.
Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei
2015-06-25
Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on inertial reference frame are discussed in this paper. Both of them are based on gravity vector integration, therefore, the performance of these algorithms is determined by integration time. In previous works, integration time is selected by experience. In order to give a criterion for the selection process, and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on the analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that in different operational conditions, the coarse alignment algorithms adopted for FOG INS are different in order to achieve better performance. Lastly, the experiment results validate the effectiveness of the proposed algorithm.
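Once the gravity vector has been integrated over two different intervals, the initial attitude can be recovered from the two resulting vector pairs. The sketch below uses the classical TRIAD construction as a stand-in for that alignment step, with purely illustrative vectors and without modeling sensor errors or the optimal integration time that is the subject of the paper.

```python
import numpy as np

def triad(r1, r2, b1, b2):
    """Classical TRIAD attitude determination: returns the rotation matrix C with
    b ~= C @ r for two non-collinear vector pairs.  In the coarse-alignment setting,
    r1, r2 would be gravity integrals expressed in the navigation frame and b1, b2
    the corresponding integrals built from accelerometer/gyro data."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack((t1, t2, t3))
    return frame(b1, b2) @ frame(r1, r2).T

if __name__ == "__main__":
    # Hypothetical true attitude (rotation about z) and noiseless "integrated gravity" vectors.
    ang = np.radians(30.0)
    c, s = np.cos(ang), np.sin(ang)
    C_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    r1, r2 = np.array([0.0, 0.0, -9.81]), np.array([0.3, 0.1, -9.80])
    b1, b2 = C_true @ r1, C_true @ r2
    C_est = triad(r1, r2, b1, b2)
    err = np.degrees(np.arccos(np.clip((np.trace(C_est.T @ C_true) - 1) / 2, -1, 1)))
    print("attitude error (deg):", err)
```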
2006-10-01
The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB, etc.) and traditional, continuum... kinetic Monte Carlo (kMC), equilibrium MC, Lattice-Boltzmann (LB), Brownian Dynamics (BD), or general agent-based (AB) simulators. It also, fortuitously... cond-mat/0310460 at arXiv.org. 27. "Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W
Local-feature analysis for automated coarse-graining of bulk-polymer molecular dynamics simulations.
Xue, Y; Ludovice, P J; Grover, M A
2012-12-01
A method for automated coarse-graining of bulk polymers is presented, using the data-mining tool of local feature analysis. Most existing methods for polymer coarse-graining define superatoms based on their covalent bonding topology along the polymer backbone, but here superatoms are defined based only on their correlated motions, as observed in molecular dynamics simulations. Correlated atomic motions are identified in the simulation data using local feature analysis, between atoms in the same or in different polymer chains. Groups of highly correlated atoms constitute the superatoms in the coarse-graining scheme, and the positions of their seed coordinates are then projected forward in time. Based on only the seed positions, local feature analysis enables the full reconstruction of all atomic positions. This reconstruction suggests an iterative scheme for reducing computation: the reconstructed positions initialize another short molecular dynamics simulation, new superatoms are identified, and the system is again projected forward in time.
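The sketch below illustrates the general idea of grouping atoms by correlated motion, using a plain correlation matrix and hierarchical clustering as a stand-in for local feature analysis; the toy trajectory, cluster count, and helper names are all hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def correlated_superatoms(trajectory, n_superatoms):
    """Group atoms whose displacements are highly correlated into 'superatoms'.
    `trajectory` has shape (n_frames, n_atoms, 3).  This is a generic correlation/
    clustering sketch inspired by the abstract, not the local-feature-analysis
    algorithm itself."""
    disp = trajectory - trajectory.mean(axis=0)          # remove mean positions
    flat = disp.reshape(disp.shape[0], -1)               # (frames, atoms * 3)
    corr = np.corrcoef(flat.T)                           # coordinate-coordinate correlations
    n_atoms = trajectory.shape[1]
    # collapse the 3 Cartesian components of each atom pair to a single score
    atom_corr = np.abs(corr).reshape(n_atoms, 3, n_atoms, 3).mean(axis=(1, 3))
    distance = 1.0 - atom_corr                           # high correlation -> small distance
    np.fill_diagonal(distance, 0.0)
    iu = np.triu_indices(n_atoms, k=1)                   # condensed distance matrix
    labels = fcluster(linkage(distance[iu], method="average"),
                      t=n_superatoms, criterion="maxclust")
    return labels                                        # superatom index per atom

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy trajectory: two rigid groups of 5 atoms each, moving independently
    base = rng.normal(size=(2, 5, 3))
    drift = rng.normal(size=(200, 2, 1, 3)).cumsum(axis=0)
    traj = (base + drift).reshape(200, 10, 3) + 0.01 * rng.normal(size=(200, 10, 3))
    print(correlated_superatoms(traj, n_superatoms=2))
```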
Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis.
Gao, Yurui; Burns, Scott S; Lauzon, Carolyn B; Fong, Andrew E; James, Terry A; Lubar, Joel F; Thatcher, Robert W; Twillie, David A; Wirt, Michael D; Zola, Marc A; Logan, Bret W; Anderson, Adam W; Landman, Bennett A
2013-03-29
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis
NASA Astrophysics Data System (ADS)
Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.
2013-03-01
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis
Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.
2013-01-01
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software. PMID:24386548
COARSEMAP: synthesis of observations and models for coarse-mode aerosols
NASA Astrophysics Data System (ADS)
Wiedinmyer, C.; Lihavainen, H.; Mahowald, N. M.; Alastuey, A.; Albani, S.; Artaxo, P.; Bergametti, G.; Batterman, S.; Brahney, J.; Duce, R. A.; Feng, Y.; Buck, C.; Ginoux, P. A.; Chen, Y.; Guieu, C.; Cohen, D.; Hand, J. L.; Harrison, R. M.; Herut, B.; Ito, A.; Losno, R.; Gomez, D.; Kanakidou, M.; Landing, W. M.; Laurent, B.; Mihalopoulos, N.; Mackey, K.; Maenhaut, W.; Hueglin, C.; Milando, C.; Miller, R. L.; Myriokefaitakis, S.; Neff, J. C.; Pandolfi, M.; Paytan, A.; Perez Garcia-Pando, C.; Prank, M.; Prospero, J. M.; Tamburo, E.; Varrica, D.; Wong, M.; Zhang, Y.
2017-12-01
Coarse mode aerosols influence Earth's climate and biogeochemistry by interacting with long-wave radiation, promoting ice nucleation, and contributing important elements to biogeochemical cycles during deposition. Yet coarse mode aerosols have received less emphasis in the scientific literature. Here we present first efforts to globally synthesize available mass concentration, composition and optical depth data and modeling for the coarse mode aerosols (<10 µm) in a new project called "COARSEMAP" (http://www.geo.cornell.edu/eas/PeoplePlaces/Faculty/mahowald/COARSEMAP/). We seek more collaborators who have observational data, especially including elemental or composition data, and/or who are interested in detailed modeling of the coarse mode. The goal will be publications synthesizing data with models, as well as providing synthesized results to the wider community.
The influence of wetting dynamics on the residual air distribution
NASA Astrophysics Data System (ADS)
Sacha, J.; Snehota, M.; Trtik, P.; Vontobel, P.
2016-12-01
The amount and distribution of the residual air during infiltration into a porous soil system has a strong influence on the infiltration rate. Concurrently, the amount of residual air is dependent on the wetting dynamics. In the presented study, two experiments were conducted on the same sample. The first experiment was performed under a constant water level condition (CWL) and the second under a constant water flux condition (CWF) at the top of the sample. The sample, composed of coarse and medium-coarse fractions of sand and fine porous ceramic, was packed into a quartz glass column with an inner diameter of 29 mm. The coarse sand represented a highly conductive region connected from the top to the bottom of the sample, with the exception of three thin (2-3 mm) separation layers made up of the medium-coarse sand. Three discs of fine ceramic formed slow flow regions. The infiltration experiments were monitored by neutron radiography on two different beamlines to produce two-dimensional (2D) projections. The CWL experiment was monitored at the NEUTRA station with an acquisition time of 16 seconds per projection, and the CWF experiment was visualized at the BOA station with an acquisition time of 0.25 seconds per projection. Both stations are located at the Paul Scherrer Institut, Switzerland. The acquired radiograms of the dry sample were subtracted from all subsequent radiograms to determine the water thickness in the projections. From series of corrected radiograms taken at different angles, three-dimensional (3D) images were reconstructed for the steady-state part of the CWL experiment and for the entire CWF experiment. The series of 3D images thus mapped the wetting of the porous system over the corresponding phase of the infiltration process. The results showed a faster steady-state infiltration rate during the CWL experiment. In this case, the air was mostly pushed out of the sample by the moving wetting front. In contrast, during the CWF experiment the water infiltrated first into the fine ceramic and then into the medium-coarse sand, attracted by stronger capillary forces in comparison to the coarse sand. Due to this effect, a significant amount of air was trapped in the preferential pathways, consequently blocking the water flow. The presence of the medium-coarse sand regions had a crucial impact on the water flow and the amount of trapped air.
An 11-bit 200 MS/s subrange SAR ADC with low-cost integrated reference buffer
NASA Astrophysics Data System (ADS)
He, Xiuju; Gu, Xian; Li, Weitao; Jiang, Hanjun; Li, Fule; Wang, Zhihua
2017-10-01
This paper presents an 11-bit 200 MS/s subrange SAR ADC with an integrated reference buffer in 65 nm CMOS. The proposed ADC employs a 3.5-bit flash ADC for coarse conversion, and a compact timing scheme at the flash/SAR boundary to speed up the conversion. The flash decision is used to control charge compensating for the reference voltage to reduce its input-dependent fluctuation. Measurement results show that the fabricated ADC has achieved significant improvement by applying the reference charge compensation. In addition, the ADC achieves a maximum signal-to-noise-and-distortion ratio of 59.3 dB at 200 MS/s. It consumes 3.91 mW from a 1.2 V supply, including the reference buffer. Project supported by the Zhongxing Telecommunication Equipment Corporation and Beijing Microelectronics Technology Institute.
Coarse-grained hydrodynamics from correlation functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, Bruce
This paper will describe a formalism for using correlation functions between different grid cells as the basis for determining coarse-grained hydrodynamic equations for modeling the behavior of mesoscopic fluid systems. Configurations from a molecular dynamics simulation are projected onto basis functions representing grid cells in a continuum hydrodynamic simulation. Equilibrium correlation functions between different grid cells are evaluated from the molecular simulation and used to determine the evolution operator for the coarse-grained hydrodynamic system. The formalism is applied to some simple hydrodynamic cases to determine the feasibility of applying this to realistic nanoscale systems.
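A simple way to picture the projection step is to bin per-particle quantities into grid cells and accumulate equal-time cross-correlations between the cells, as in the sketch below; this is an illustrative stand-in, not the basis-function projection or evolution-operator estimation used in the paper, and all names and values are hypothetical.

```python
import numpy as np

def cell_field(positions, values, box, n_cells):
    """Project per-particle values (e.g. mass or a momentum component) onto uniform
    grid cells by binning - a crude stand-in for projection onto cell basis functions."""
    idx = np.floor(positions / box * n_cells).astype(int) % n_cells
    flat = np.ravel_multi_index(idx.T, (n_cells,) * positions.shape[1])
    return np.bincount(flat, weights=values, minlength=n_cells**positions.shape[1])

def cell_correlations(snapshots_positions, snapshots_values, box, n_cells):
    """Equal-time covariances <a_i a_j> - <a_i><a_j> between grid cells, averaged over
    snapshots; these are the kind of inputs from which a coarse-grained evolution
    operator could be estimated (illustrative sketch only)."""
    fields = np.array([cell_field(p, v, box, n_cells)
                       for p, v in zip(snapshots_positions, snapshots_values)])
    mean = fields.mean(axis=0)
    return (fields - mean).T @ (fields - mean) / fields.shape[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    box, n_cells, n_particles = 10.0, 4, 500
    pos = [rng.uniform(0, box, size=(n_particles, 2)) for _ in range(100)]
    val = [rng.normal(1.0, 0.1, size=n_particles) for _ in range(100)]
    C = cell_correlations(pos, val, box, n_cells)
    print("correlation matrix shape:", C.shape)   # (n_cells**2, n_cells**2)
```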
McCarty, J; Clark, A J; Copperman, J; Guenza, M G
2014-05-28
Structural and thermodynamic consistency of coarse-graining models across multiple length scales is essential for the predictive role of multi-scale modeling and molecular dynamic simulations that use mesoscale descriptions. Our approach is a coarse-grained model based on integral equation theory, which can represent polymer chains at variable levels of chemical details. The model is analytical and depends on molecular and thermodynamic parameters of the system under study, as well as on the direct correlation function in the k → 0 limit, c0. A numerical solution to the PRISM integral equations is used to determine c0, by adjusting the value of the effective hard sphere diameter, dHS, to agree with the predicted equation of state. This single quantity parameterizes the coarse-grained potential, which is used to perform mesoscale simulations that are directly compared with atomistic-level simulations of the same system. We test our coarse-graining formalism by comparing structural correlations, isothermal compressibility, equation of state, Helmholtz and Gibbs free energies, and potential energy and entropy using both united atom and coarse-grained descriptions. We find quantitative agreement between the analytical formalism for the thermodynamic properties, and the results of Molecular Dynamics simulations, independent of the chosen level of representation. In the mesoscale description, the potential energy of the soft-particle interaction becomes a free energy in the coarse-grained coordinates which preserves the excess free energy from an ideal gas across all levels of description. The structural consistency between the united-atom and mesoscale descriptions means the relative entropy between descriptions has been minimized without any variational optimization parameters. The approach is general and applicable to any polymeric system in different thermodynamic conditions.
Bhadra, Pratiti; Pal, Debnath
2017-04-01
Dynamics is integral to the function of proteins, yet the use of molecular dynamics (MD) simulation as a technique remains under-explored for molecular function inference. This is more important in the context of genomics projects where novel proteins are determined with limited evolutionary information. Recently we developed a method to match the query protein's flexible segments to infer function using a novel approach combining analysis of residue fluctuation-graphs and auto-correlation vectors derived from coarse-grained (CG) MD trajectory. The method was validated on a diverse dataset with sequence identity between proteins as low as 3%, with high function-recall rates. Here we share its implementation as a publicly accessible web service, named DynFunc (Dynamics Match for Function) to query protein function from ≥1 µs long CG dynamics trajectory information of protein subunits. Users are provided with the custom-developed coarse-grained molecular mechanics (CGMM) forcefield to generate the MD trajectories for their protein of interest. On upload of trajectory information, the DynFunc web server identifies specific flexible regions of the protein linked to putative molecular function. Our unique application does not use evolutionary information to infer molecular function from MD information and can, therefore, work for all proteins, including moonlighting and the novel ones, whenever structural information is available. Our pipeline is expected to be of utility to all structural biologists working with novel proteins and interested in moonlighting functions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Controls on patterns of coarse organic particle retention in headwater streams
E. N. Jack Brookshire; Kathleen A. Dwire
2003-01-01
Organic matter retention is an integral ecosystem process affecting C and nutrient dynamics and biota in streams. Influences of discharge (Q), reach-scale channel form, and riparian vegetation on coarse particulate organic matter (CPOM) retention were analyzed in 2 headwater streams in northeastern Oregon. Ginkgo biloba leaves were released in coniferous forest reaches...
Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri
2017-11-01
This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.
Shen, Lin; Yang, Weitao
2016-04-12
We developed a new multiresolution method that spans three levels of resolution with quantum mechanical, atomistic molecular mechanical, and coarse-grained models. The resolution-adapted all-atom and coarse-grained water model, in which an all-atom structural description of the entire system is maintained during the simulations, is combined with the ab initio quantum mechanics and molecular mechanics method. We apply this model to calculate the redox potentials of the aqueous ruthenium and iron complexes by using the fractional number of electrons approach and thermodynamic integration simulations. The redox potentials are recovered in excellent accordance with the experimental data. The speed-up of the hybrid all-atom and coarse-grained water model renders it computationally more attractive. The accuracy depends on the hybrid all-atom and coarse-grained water model used in the combined quantum mechanical and molecular mechanical method. We have used another multiresolution model, in which an atomic-level layer of water molecules around redox center is solvated in supramolecular coarse-grained waters for the redox potential calculations. Compared with the experimental data, this alternative multilayer model leads to less accurate results when used with the coarse-grained polarizable MARTINI water or big multipole water model for the coarse-grained layer.
NASA Technical Reports Server (NTRS)
Mccollum, Bruce; Graves, Mark
1994-01-01
The International Ultraviolet Explorer (IUE) satellite observatory has been in operation continuously since 1978. It typically carries out several thousand observations per year for over a hundred different science projects. These observations, which can occur in one of four different data-taking modes, fall under several satellite-related constraints and many other constraints which derive from the science goals of the projects being undertaken. One strategy which has made the scheduling problem tractable has been that of 'coarse-graining' the time into discrete blocks of equal size (8 hours), each of which is devoted to a single science program, and each of which is sufficiently long for several observations to be carried out. We call it 'coarse-graining' because the schedule is done at a 'coarse' level which ignores fine structure; i.e., no attempt is made to plan the sequence of observations occurring within each time block. We have incorporated the IUE's coarse-grained approach in new software which examines the science needs of the observations and produces a limited set of alternative schedules which meet all of the instrument and science-related constraints. With this algorithm, the IUE can still be scheduled by a single person using a standard workstation, as it has been. We believe that this software could be adapted to more complex missions while retaining the IUE's high flexibility and efficiency, benefiting the scientific return of future satellite missions.
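As a toy illustration of scheduling at the coarse-grained block level (not the IUE software itself), the sketch below greedily assigns fixed-length blocks to science programs subject to per-program block constraints; program names and the constraint encoding are invented for the example.

```python
def schedule_blocks(programs, n_blocks):
    """Greedy assignment of fixed-length observing blocks to science programs.
    Each program is a dict with a 'name', the number of blocks it still 'needs',
    and a set of 'allowed' block indices (a stand-in for satellite and science
    constraints).  Toy illustration of the coarse-grained strategy described above."""
    assignment = [None] * n_blocks
    # schedule the most constrained programs (fewest allowed blocks per needed block) first
    for prog in sorted(programs, key=lambda p: len(p["allowed"]) / max(p["needs"], 1)):
        for block in sorted(prog["allowed"]):
            if prog["needs"] == 0:
                break
            if assignment[block] is None:
                assignment[block] = prog["name"]
                prog["needs"] -= 1
    return assignment

if __name__ == "__main__":
    programs = [
        {"name": "AGN survey",  "needs": 2, "allowed": {0, 1, 2, 3}},
        {"name": "Hot stars",   "needs": 1, "allowed": {2, 3}},
        {"name": "Comet watch", "needs": 2, "allowed": {1, 4, 5}},
    ]
    print(schedule_blocks(programs, n_blocks=6))
```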
The scientific foundation of the LANDFIRE Prototype Project [Chapter 3
Robert E. Keane; Matthew Rollins
2006-01-01
The Landscape Fire and Resource Management Planning Tools Prototype Project, or LANDFIRE Prototype Project, originated from a recent mapping project that developed a set of coarse-scale spatial data layers for wildland fire management describing fire hazard and ecological status for the conterminous United States (Hardy and others 2001; Schmidt and others 2002; www. fs...
NASA Astrophysics Data System (ADS)
Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em
2016-07-01
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em
2016-07-28
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen; Bian, Xin; Yang, Xiu
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models are not interchangeable. If the free parameters are properly selected, the reverse CG procedure also yields an effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
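For reference, the iterative Boltzmann inversion mentioned in the records above updates a tabulated pair potential from the mismatch between the current and target radial distribution functions; the snippet below shows the textbook update rule with a damping factor, using made-up RDF data rather than anything from these papers.

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT, alpha=0.2, eps=1e-12):
    """One iterative Boltzmann inversion step for a tabulated pair potential:
    V_{n+1}(r) = V_n(r) + alpha * kT * ln(g_n(r) / g_target(r)).
    `alpha` damps the update for stability; `eps` guards against log(0).
    Generic textbook form, not the specific IBI implementation used in the papers."""
    return V + alpha * kT * np.log((g_current + eps) / (g_target + eps))

if __name__ == "__main__":
    r = np.linspace(0.5, 3.0, 6)
    V = np.zeros_like(r)                       # start from a zero potential
    g_target = np.exp(-(r - 1.5) ** 2)         # hypothetical target RDF shape
    g_current = np.exp(-(r - 1.3) ** 2)        # RDF produced by the current potential
    print(np.round(ibi_update(V, g_current, g_target, kT=1.0), 3))
```

In practice the updated potential is fed back into a new coarse-grained simulation to produce the next g(r), and the loop repeats until the target RDF is reproduced.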
Multi-site field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 (PM10-2.5) in ambient air. The field studies involved the use of both time-integrated filter-based and direct continuous methods. Despite operationa...
NASA Astrophysics Data System (ADS)
Kawaguchi, Kazutomo; Nakagawa, Satoshi; Kurniawan, Isman; Kodama, Koichi; Arwansyah, Muhammad Saleh; Nagao, Hidemi
2018-03-01
We present a simple coarse-grained model of the effective interaction for charged amino acid residues, such as Glu and Lys, in a water solvent. The free-energy profile as a function of the distance between two charged amino acid side-chain analogues in an explicit water solvent is calculated with all-atom molecular dynamics simulation and thermodynamic integration method. The calculated free-energy profile is applied to the coarse-grained potential of the effective interaction between two amino acid residues. The Langevin dynamics simulations with our coarse-grained potential are performed for association of a small protein complex, GCN4-pLI tetramer. The tetramer conformation reproduced by our coarse-grained model is similar to the X-ray crystallographic structure. We show that the effective interaction between charged amino acid residues stabilises association and orientation of protein complex. We also investigate the association pathways of GCN4-pLI tetramer.
Dänicke, Sven; Beineke, Andreas; Berk, Andreas; Kersten, Susanne
2017-01-01
The common feed contaminant deoxynivalenol (DON) was reported to influence the morphology of the pars nonglandularis (PN) of porcine stomach. Moreover, finely ground feed is known to trigger the development of ulcers and other pathologies of PN while coarsely ground feed protects from such lesions. The interactions between grinding fineness and DON contamination of feed were not examined so far. Therefore, both finely and coarsely ground feeds were tested either in the absence or presence of a DON contaminated wheat on growth performance and health of rearing piglets, including stomach integrity. DON contamination significantly reduced feed intake and serum albumin concentration with this effect being more pronounced after feeding the coarsely ground feed. Albeit at a higher level, albumin concentration was also reduced after feeding the finely ground and uncontaminated feed. Finely ground and DON-contaminated feed caused a significantly more pronounced lymphoplasmacytic infiltration both of PN and pars glandularis, partly paralleled by lymph follicle formation and detritus filled foveolae and tubes suggesting a local immune response probably triggered by epithelial lesions. It is concluded that DON contamination of feed exacerbates the adverse effects of finely ground feed on stomach mucosal integrity.
Dänicke, Sven; Beineke, Andreas; Berk, Andreas; Kersten, Susanne
2017-01-01
The common feed contaminant deoxynivalenol (DON) was reported to influence the morphology of the pars nonglandularis (PN) of porcine stomach. Moreover, finely ground feed is known to trigger the development of ulcers and other pathologies of PN while coarsely ground feed protects from such lesions. The interactions between grinding fineness and DON contamination of feed were not examined so far. Therefore, both finely and coarsely ground feeds were tested either in the absence or presence of a DON contaminated wheat on growth performance and health of rearing piglets, including stomach integrity. DON contamination significantly reduced feed intake and serum albumin concentration with this effect being more pronounced after feeding the coarsely ground feed. Albeit at a higher level, albumin concentration was also reduced after feeding the finely ground and uncontaminated feed. Finely ground and DON-contaminated feed caused a significantly more pronounced lymphoplasmacytic infiltration both of PN and pars glandularis, partly paralleled by lymph follicle formation and detritus filled foveolae and tubes suggesting a local immune response probably triggered by epithelial lesions. It is concluded that DON contamination of feed exacerbates the adverse effects of finely ground feed on stomach mucosal integrity. PMID:28045426
NASA Astrophysics Data System (ADS)
Richter, Martin; Fingerhut, Benjamin P.
2017-06-01
The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to substantial reduction of considered path segments for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of a hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath-assisted long-range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer, and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.
Nonlocal equation for the superconducting gap parameter
NASA Astrophysics Data System (ADS)
Simonucci, S.; Strinati, G. Calvanese
2017-08-01
The properties of a nonlocal (integral) equation for the superconducting gap parameter are considered in detail; the equation is obtained by a coarse-graining procedure applied to the Bogoliubov-de Gennes (BdG) equations over the whole coupling-versus-temperature phase diagram associated with the superfluid phase. It is found that the limiting size of the coarse-graining procedure, which is dictated by the range of the kernel of this integral equation, corresponds to the size of the Cooper pairs over the whole coupling-versus-temperature phase diagram up to the critical temperature, even when Cooper pairs turn into composite bosons on the BEC side of the BCS-BEC crossover. A practical method is further implemented to solve this integral equation numerically in an efficient way, based on a novel algorithm for calculating the Fourier transforms. Application of this method to the case of an isolated vortex, throughout the BCS-BEC crossover and for all temperatures in the superfluid phase, helps clarify the nature of the length scales associated with a single vortex and the kinds of details that are in practice disposed of by the coarse-graining procedure on the BdG equations.
Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation
NASA Astrophysics Data System (ADS)
Yi, Ce
With the capabilities of computer hardware and software increasing rapidly, deterministic methods to solve the linear Boltzmann equation (LBE) have attracted attention for computational applications in both the nuclear engineering and medical physics fields. Among the various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used. The SN method is the traditional approach to solving the LBE owing to its stability and efficiency, while the MOC has advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods can suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combining the two methods in one code, TITAN. The hybrid approach is specifically designed for application to problems containing low scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low scattering media, the hybrid approach uses a block-oriented characteristics solver in low scattering regions and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays depending on the solver assigned to the coarse mesh. Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev quadrature) along with ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the SN solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared by both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested on a number of benchmark problems from both the nuclear engineering and medical physics fields, among them the Kobayashi benchmark problems and a computed tomography (CT) device model. We also developed an extra sweep procedure with a fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied to a single photon emission computed tomography (SPECT) phantom model to simulate SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks, along with its scalability. A modified version of the characteristics solver is integrated in the PENTRAN code and tested within the parallel engine of PENTRAN. The limitations of the hybrid algorithm are also studied.
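The block-oriented idea (choose the solver per coarse mesh according to how scattering-dominated the region is) can be illustrated with a short Python sketch. The CoarseMesh fields, the scattering-ratio criterion, and the 0.3 threshold are illustrative assumptions, not TITAN's actual selection logic.

    from dataclasses import dataclass

    @dataclass
    class CoarseMesh:
        name: str
        sigma_s: float   # macroscopic scattering cross section (1/cm)
        sigma_t: float   # macroscopic total cross section (1/cm)

    def choose_solver(block: CoarseMesh, c_threshold: float = 0.3) -> str:
        """Return 'MOC' for low-scattering blocks, 'SN' otherwise."""
        c = block.sigma_s / block.sigma_t   # scattering ratio of the block
        return "MOC" if c < c_threshold else "SN"

    blocks = [CoarseMesh("air duct", 0.01, 0.50), CoarseMesh("water", 3.2, 3.5)]
    for b in blocks:
        print(b.name, "->", choose_solver(b))

In this spirit, a streaming region such as a duct would be handed to the characteristics solver, while the scattering-dominated remainder of the model would be swept with SN.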
NASA Astrophysics Data System (ADS)
Tian, Huiping; Shen, Guansheng; Liu, Weijia; Ji, Yuefeng
2013-07-01
An integrated model of a photonic crystal (PC) demultiplexer that can be used to combine dense wavelength-division multiplexing (DWDM) and coarse wavelength-division multiplexing (CWDM) systems is first proposed. By applying the PC demultiplexer, a dense channel spacing of 0.8 nm and a coarse channel spacing of 20 nm are obtained at the same time. The transmission can be improved to nearly 90%, and the crosstalk can be decreased to less than -18 dB by enlarging the width of the bus waveguide. The total size of the device is 21×42 μm2. Four channels on one side of the demultiplexer achieve DWDM in the wavelength range between 1575 and 1578 nm, and the four channels on the other side achieve CWDM in the wavelength range between 1490 and 1565 nm. The demonstrated demultiplexer can be applied in future CWDM and DWDM systems, significantly reducing architecture costs.
M. Boyd Edwards
2004-01-01
In 1996, a study began at Savannah River Site to investigate large-scale replicated forest areas to control coarse woody debris for integrated biodiversity objectives. Research design was a randomized complete block with four treatments replicated in four blocks, resulting in 16 plots. The treatments applied to 50-year-old loblolly pine stands were (1) control, (2)...
Coarse-grained hydrodynamics from correlation functions
NASA Astrophysics Data System (ADS)
Palmer, Bruce
2018-02-01
This paper will describe a formalism for using correlation functions between different grid cells as the basis for determining coarse-grained hydrodynamic equations for modeling the behavior of mesoscopic fluid systems. Configurations from a molecular dynamics simulation or other atomistic simulation are projected onto basis functions representing grid cells in a continuum hydrodynamic simulation. Equilibrium correlation functions between different grid cells are evaluated from the molecular simulation and used to determine the evolution operator for the coarse-grained hydrodynamic system. The formalism is demonstrated on a discrete particle simulation of diffusion with a spatially dependent diffusion coefficient. Correlation functions are calculated from the particle simulation and the spatially varying diffusion coefficient is recovered using a fitting procedure.
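A toy sketch of the general construction described here (project configurations onto grid cells, build equal-time and time-lagged cell-cell correlation matrices, and back out a coarse evolution operator) is given below. The occupancy-based projection, the matrix-logarithm estimator, and all names are illustrative assumptions of the sketch, not the paper's formalism.

    import numpy as np
    from scipy.linalg import logm

    def coarse_generator(traj, edges, lag, dt):
        # Project each frame of particle positions onto grid cells, then estimate
        # a coarse evolution operator L from C(lag) ~ exp(L*lag*dt) C(0).
        occ = np.array([np.histogram(frame, bins=edges)[0] for frame in traj], dtype=float)
        occ = occ[:, :-1]                 # drop one cell (total count is conserved)
        occ -= occ.mean(axis=0)           # fluctuations about the mean occupancy
        n = len(occ) - lag
        C0 = occ[:-lag].T @ occ[:-lag] / n    # equal-time correlation matrix
        Ct = occ[lag:].T @ occ[:-lag] / n     # time-lagged correlation matrix
        return np.real(logm(Ct @ np.linalg.inv(C0))) / (lag * dt)

    # Toy usage: random walkers on a periodic unit interval, projected onto 8 cells
    rng = np.random.default_rng(1)
    positions = np.cumsum(rng.normal(0.0, 0.05, size=(2000, 50)), axis=0) % 1.0
    L = coarse_generator(positions, np.linspace(0.0, 1.0, 9), lag=10, dt=0.01)

For a diffusive system the recovered operator should resemble a discrete Laplacian weighted by the local diffusion coefficient, which is the quantity the paper recovers by a fitting procedure.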
How does the wetting dynamics affect capillary trapping in heterogeneous soil: Neutron imaging study
NASA Astrophysics Data System (ADS)
Sacha, Jan; Snehota, Michal; Trtik, Pavel; Vontobel, Peter
2017-04-01
The wetting dynamics of water infiltration into a porous soil system strongly influences the amount of air entrapped in the soil. At the same time, a larger volume of entrapped air obstructs water flow in the medium. This effect is more noticeable in soils with preferential pathways, because the soil matrix exerts higher capillary forces and the air therefore accumulates in the preferential pathways. In the presented study, two experiments were conducted on the same sample. The first experiment was performed under a constant water level condition (CWL) and the second under a constant water flux condition (CWF) at the top of the sample. The sample was composed of coarse and medium-coarse fractions of sand and fine porous ceramics. The materials were packed into a quartz glass column with an inner diameter of 29 mm. The coarse sand represented a highly conductive region connected from the top to the bottom of the sample, with the exception of three thin (2-3 mm) separation layers made of the medium-coarse sand. Three discs of fine ceramics formed slow-flow regions. The infiltration experiments were monitored by neutron radiography at two different beamlines to produce two-dimensional (2D) projections. The CWL experiment was monitored at the NEUTRA station with an acquisition time of 16 seconds per projection, and the CWF experiment was visualized at the BOA station with an acquisition time of 0.25 seconds per projection. Both stations are located at the Paul Scherrer Institut, Switzerland. The radiograms of the dry sample were subtracted from all subsequent radiograms to determine the water thickness in the projections. From series of corrected radiograms taken at different angles, three-dimensional (3D) images were reconstructed for the steady-state stage of the CWL experiment and for the entire CWF experiment. The series of 3D images thus mapped the wetting of the porous system over the corresponding phase of the infiltration process. The results show a higher steady-state infiltration rate during the CWL experiment. In this case, the air was mostly pushed out of the sample by the moving wetting front. The infiltration rate decreased continuously during infiltration down to the steady-state value. When the wetting front reached the bottom of the sample, air moved from the matrix domain to the preferential domain; the infiltration rate was still higher than during the CWF experiment. In contrast, during the CWF experiment the water infiltrated first into the fine ceramics and then into the medium-coarse sand, drawn by capillary forces stronger than those in the coarse sand. Due to this effect a significant amount of air was trapped in the preferential pathways and consequently blocked the water flow, primarily where the medium-coarse sand regions were present.
Coarse coding and discourse comprehension in adults with right hemisphere brain damage
Tompkins, Connie A.; Scharp, Victoria L.; Meigh, Kimberly M.; Fassbinder, Wiltrud
2009-01-01
Background Various investigators suggest that some discourse-level comprehension difficulties in adults with right hemisphere brain damage (RHD) have a lexical-semantic basis. As words are processed, the intact right hemisphere arouses and sustains activation of a wide-ranging network of secondary or peripheral meanings and features—a phenomenon dubbed “coarse coding”. Coarse coding impairment has been postulated to underpin some prototypical RHD comprehension deficits, such as difficulties with nonliteral language interpretation, discourse integration, some kinds of inference generation, and recovery when a reinterpretation is needed. To date, however, no studies have addressed the hypothesised link between coarse coding deficit and discourse comprehension in RHD. Aims The current investigation examined whether coarse coding was related to performance on two measures of narrative comprehension in adults with RHD. Methods & Procedures Participants were 32 adults with unilateral RHD from cerebrovascular accident, and 38 adults without brain damage. Coarse coding was operationalised as poor activation of peripheral/weakly related semantic features of words. For the coarse coding assessment, participants listened to spoken sentences that ended in a concrete noun. Each sentence was followed by a spoken target phoneme string. Targets were subordinate semantic features of the sentence-final nouns that were incompatible with their dominant mental representations (e.g., “rotten” for apple). Targets were presented at two post-noun intervals. A lexical decision task was used to gauge both early activation and maintenance of activation of these weakly related semantic features. One of the narrative tasks assessed comprehension of implied main ideas and details, while the other indexed high-level inferencing and integration. Both comprehension tasks were presented auditorily. For all tasks, accuracy of performance was the dependent measure. Correlations were computed within the RHD group between both the early and late coarse coding measures and the two discourse measures. Additionally, ANCOVA and independent t-tests were used to compare both early and sustained coarse coding in subgroups of good and poor RHD comprehenders. Outcomes & Results The group with RHD was less accurate than the control group on all measures. The finding of coarse coding impairment (difficulty activating/sustaining activation of a word’s peripheral features) may appear to contradict prior evidence of RHD suppression deficit (prolonged activation for context-inappropriate meanings of words). However, the sentence contexts in this study were unbiased and thus did not provide an appropriate test of suppression function. Correlations between coarse coding and the discourse measures were small and nonsignificant. There were no differences in coarse coding between RHD comprehension subgroups on the high-level inferencing task. There was also no distinction in early coarse coding for subgroups based on comprehension of implied main ideas and details. But for these same subgroups, there was a difference in sustained coarse coding. Poorer RHD comprehenders of implied information from discourse were also poorer at maintaining activation for semantically distant features of concrete nouns. Conclusions This study provides evidence of a variant of the postulated link between coarse coding and discourse comprehension in RHD. 
Specifically, adults with RHD who were particularly poor at sustaining activation for peripheral semantic features of nouns were also relatively poor comprehenders of implied information from narratives. PMID:20037670
Comparison of iterative inverse coarse-graining methods
NASA Astrophysics Data System (ADS)
Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.
2016-10-01
Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
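The IBI update underlying this comparison can be written in a few lines. The sketch below shows one iteration of the textbook rule V_{i+1}(r) = V_i(r) + kBT ln[g_i(r)/g_target(r)] on tabulated arrays; the kBT value and the small regularizer eps are assumptions of the sketch, and it does not cover the IMC, pressure-corrected, or KBI-constrained variants discussed in the abstract.

    import numpy as np

    def ibi_step(V, g_current, g_target, kBT=2.494, eps=1e-12):
        # One Iterative Boltzmann Inversion update on a tabulated r-grid:
        #   V_{i+1}(r) = V_i(r) + kBT * ln( g_i(r) / g_target(r) )
        # kBT defaults to roughly kJ/mol at 300 K; eps avoids log(0) where g vanishes.
        return V + kBT * np.log((g_current + eps) / (g_target + eps))

    # Toy usage: start from the potential of mean force of a made-up target RDF
    r = np.linspace(0.3, 1.5, 121)
    g_target = 1.0 + 0.3 * np.exp(-((r - 0.5) / 0.1) ** 2)
    V = -2.494 * np.log(g_target)          # PMF as the usual initial guess
    g_current = np.ones_like(r)            # RDF measured from a trial CG run
    V_next = ibi_step(V, g_current, g_target)

Iterating this update until the coarse-grained RDF matches the target reproduces the structural match the abstract refers to, while leaving thermodynamic quantities such as pressure unconstrained unless additional correction terms are added.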
Hinckley, Daniel M.; Freeman, Gordon S.; Whitmer, Jonathan K.; de Pablo, Juan J.
2013-01-01
A new 3-Site-Per-Nucleotide coarse-grained model for DNA is presented. The model includes anisotropic potentials between bases involved in base stacking and base pair interactions that enable the description of relevant structural properties, including the major and minor grooves. In an improvement over available coarse-grained models, the correct persistence length is recovered for both ssDNA and dsDNA, allowing for simulation of non-canonical structures such as hairpins. DNA melting temperatures, measured for duplexes and hairpins by integrating over free energy surfaces generated using metadynamics simulations, are shown to be in quantitative agreement with experiment for a variety of sequences and conditions. Hybridization rate constants, calculated using forward-flux sampling, are also shown to be in good agreement with experiment. The coarse-grained model presented here is suitable for use in biological and engineering applications, including nucleosome positioning and DNA-templated engineering. PMID:24116642
Adiabatic coarse-graining and simulations of stochastic biochemical networks
Sinitsyn, N. A.; Hengartner, Nicolas; Nemenman, Ilya
2009-01-01
We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach is similar to the Born–Oppenheimer approximation in quantum mechanics and follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for high-accuracy, low-complexity coarse-grained numerical simulations. As an example, we derive the coarse-grained description for a chain of biochemical reactions and show that the coarse-grained and the microscopic simulations agree, but the former is 3 orders of magnitude faster. PMID:19525397
Formulation of coarse integral imaging and its applications
NASA Astrophysics Data System (ADS)
Kakeya, Hideki
2008-02-01
This paper formulates the notion of coarse integral imaging and applies it to practical designs of 3D displays for robot teleoperation and automobile HUDs. 3D display technologies are in demand in applications where real-time and precise depth perception is required, such as teleoperation of robot manipulators and HUDs for automobiles. 3D displays for these applications, however, have not been realized so far. In conventional 3D display technologies the eyes are usually induced to focus on the screen, which is not suitable for the above purposes. To overcome this problem the author adopts the coarse integral imaging system, in which each component lens is large enough to cover dozens of times more pixels than the number of views. The merit of this system is that it can induce the viewer to focus on planes at various depths by generating a real image or a virtual image off the screen. This system, however, has major disadvantages in image quality, caused by lens aberration and discontinuities at the joints of the component lenses. In this paper the author proposes practical optical designs for 3D monitors for robot teleoperation and 3D HUDs for automobiles that overcome the problems of aberration and image discontinuity.
High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.
Haxton, Thomas K
2015-03-10
We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.
Hybrid silica coarse wavelength-division multiplexer transmitter optical subassembly
NASA Astrophysics Data System (ADS)
An, Jun-Ming; Zhang, Jia-Shun; Wang, Liang-Liang; Zhu, Kaiwu; Sun, Bingli; Li, Yong; Hou, Jie; Li, Jian-Guang; Wu, Yuan-Da; Wang, Yue; Yin, Xiao-Jie
2018-01-01
Based on silica arrayed waveguide grating technology, a hybrid integrated transmitter optical subassembly was developed. Four directly modulated distributed feedback lasers and four focusing microlenses were integrated with a coarse wavelength-division multiplexer (CWDM) on a CuW substrate. The four-channel silica-on-silicon CWDM was fabricated with a 1.5% refractive index difference and 20-nm wavelength spacing. The experimental results showed that the output optical power was >3 mW at 45 mA of injection current, the slope efficiency was >0.0833 W/A, and the 3-dB bandwidth was broader than 18.15 GHz. The 1-dB compression points were higher than 18 and 15.8 dBm for frequencies of 10 and 18 GHz, respectively.
Residential indoor and outdoor coarse particles and associated endotoxin exposures
NASA Astrophysics Data System (ADS)
Wheeler, Amanda J.; Dobbin, Nina A.; Lyrette, Ninon; Wallace, Lance; Foto, Mark; Mallick, Ranjeeta; Kearney, Jill; Van Ryswyk, Keith; Gilbert, Nicolas L.; Harrison, Ian; Rispler, Kathleen; Héroux, Marie-Eve
2011-12-01
There is a growing body of evidence demonstrating that coarse particles (PM10-2.5) have detrimental impacts upon health, especially for respiratory effects. There are limited data available for indoor residential exposures. Some data exist regarding the composition of this PM size fraction with emphasis on crustal elements and biological components. This study includes data from 146 homes sampled in Regina, Saskatchewan (SK) where 5-day integrated concurrent monitoring of indoor and outdoor coarse particles was conducted during the winter and summer of 2007. The coarse particle filters were subsequently analysed for endotoxin content to determine the contribution of this compound. Winter indoor geometric mean concentrations of coarse particles exceeded outdoor concentrations (3.73 μg m^-3 vs 2.49 μg m^-3; paired t-test p < 0.0001); however the reverse was found in summer (4.34 μg m^-3 vs 8.82 μg m^-3; paired t-test p < 0.0001). Linear regression indicated that winter predictors of indoor coarse particles were outdoor coarse particles, ventilation and presence of at least two or more occupants. During the summer, increased use of central air conditioning was associated with reduced coarse particles, while smoking and the presence of two or more occupants resulted in increased coarse particles. Endotoxin concentrations (EU μg^-1) were lower indoors than outdoors in both seasons. Spatial variability of ambient coarse particles was assessed to determine the suitability of using a single monitoring station within a city to estimate exposure. The coefficients of variation between homes sampled simultaneously and the central monitoring station were calculated (median COV in summer = 15% and winter = 24%) and showed significant variability by week, especially during the summer months, suggesting a single site may be insufficient for characterizing exposure. Future studies should consider daily measurements per home to understand shorter term exposures and day to day variability of these pollutants.
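The spatial-variability check described here reduces to a simple coefficient-of-variation calculation between each home and the simultaneous central-site value. The sketch below illustrates that arithmetic with assumed input concentrations; it is not the study's analysis code.

    import numpy as np

    def weekly_cov(home, central):
        # Coefficient of variation (%) between a home's coarse-particle
        # concentration and the simultaneous central-site value, one entry per week.
        pair = np.array([home, central], dtype=float)
        return 100.0 * pair.std(axis=0, ddof=1) / pair.mean(axis=0)

    # Illustrative weekly concentrations in ug/m^3 (made-up values)
    print(weekly_cov([4.2, 6.1, 9.0], [5.0, 5.5, 8.2]))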
Fernández, Miguel; Hamilton, Healy H; Kueppers, Lara M
2015-11-01
Studies that model the effect of climate change on terrestrial ecosystems often use climate projections from downscaled global climate models (GCMs). These simulations are generally too coarse to capture patterns of fine-scale climate variation, such as the sharp coastal energy and moisture gradients associated with wind-driven upwelling of cold water. Coastal upwelling may limit future increases in coastal temperatures, compromising GCMs' ability to provide realistic scenarios of future climate in these coastal ecosystems. Taking advantage of naturally occurring variability in the high-resolution historic climatic record, we developed multiple fine-scale scenarios of California climate that maintain coherent relationships between regional climate and coastal upwelling. We compared these scenarios against coarse resolution GCM projections at a regional scale to evaluate their temporal equivalency. We used these historically based scenarios to estimate potential suitable habitat for coast redwood (Sequoia sempervirens D. Don) under 'normal' combinations of temperature and precipitation, and under anomalous combinations representative of potential future climates. We found that a scenario of warmer temperature with historically normal precipitation is equivalent to climate projected by GCMs for California by 2020-2030 and that under these conditions, climatically suitable habitat for coast redwood significantly contracts at the southern end of its current range. Our results suggest that historical climate data provide a high-resolution alternative to downscaled GCM outputs for near-term ecological forecasts. This method may be particularly useful in other regions where local climate is strongly influenced by ocean-atmosphere dynamics that are not represented by coarse-scale GCMs. © 2015 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinckley, Daniel M.; Freeman, Gordon S.; Whitmer, Jonathan K.
2013-10-14
A new 3-Site-Per-Nucleotide coarse-grained model for DNA is presented. The model includes anisotropic potentials between bases involved in base stacking and base pair interactions that enable the description of relevant structural properties, including the major and minor grooves. In an improvement over available coarse-grained models, the correct persistence length is recovered for both ssDNA and dsDNA, allowing for simulation of non-canonical structures such as hairpins. DNA melting temperatures, measured for duplexes and hairpins by integrating over free energy surfaces generated using metadynamics simulations, are shown to be in quantitative agreement with experiment for a variety of sequences and conditions. Hybridizationmore » rate constants, calculated using forward-flux sampling, are also shown to be in good agreement with experiment. The coarse-grained model presented here is suitable for use in biological and engineering applications, including nucleosome positioning and DNA-templated engineering.« less
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for subsequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
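The two grid-transfer operations named in this abstract, full-weighting restriction and bilinear prolongation, can be sketched as follows in Python. The node-centered indexing, the odd grid size, and the use of scipy.ndimage.zoom for the bilinear step are assumptions of this illustration rather than the authors' implementation.

    import numpy as np
    from scipy.ndimage import zoom

    def restrict_full_weighting(fine):
        # Full-weighting restriction of the interior nodes of a node-centered
        # fine grid (2n+1 points per side) onto the coarse interior.
        return (4.0 * fine[1:-1:2, 1:-1:2]
                + 2.0 * (fine[0:-2:2, 1:-1:2] + fine[2::2, 1:-1:2]
                         + fine[1:-1:2, 0:-2:2] + fine[1:-1:2, 2::2])
                + (fine[0:-2:2, 0:-2:2] + fine[0:-2:2, 2::2]
                   + fine[2::2, 0:-2:2] + fine[2::2, 2::2])) / 16.0

    def prolong_bilinear(coarse, fine_shape):
        # Bilinear interpolation of the coarse Poisson solution back onto the
        # fine grid used for the vorticity (advection-diffusion) update.
        factors = (fine_shape[0] / coarse.shape[0], fine_shape[1] / coarse.shape[1])
        return zoom(coarse, factors, order=1)

    # Toy usage: one coarsening level on a 65x65 field
    fine = np.random.default_rng(0).random((65, 65))
    coarse_rhs = restrict_full_weighting(fine)           # 32x32 interior values
    back_on_fine = prolong_bilinear(coarse_rhs, fine.shape)

In a CGP step of the kind described, the restriction would be applied to the Poisson right-hand side before each coarse solve, and the prolongation would map the resulting stream function back to the fine grid before the next Runge-Kutta substep.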
Microphysical and Optical Properties of Saharan Dust Measured during the ICE-D Aircraft Campaign
NASA Astrophysics Data System (ADS)
Ryder, Claire; Marenco, Franco; Brooke, Jennifer; Cotton, Richard; Taylor, Jonathan
2017-04-01
During August 2015, the UK FAAM BAe146 research aircraft was stationed in Cape Verde off the coast of West Africa. Measurements of Saharan dust, and ice and liquid water clouds, were taken for the ICE-D (Ice in Clouds Experiment - Dust) project - a multidisciplinary project aimed at further understanding aerosol-cloud interactions. Six flights formed part of a sub-project, AER-D, solely focussing on measurements of Saharan dust within the African dust plume. Dust loadings observed during these flights varied (aerosol optical depths of 0.2 to 1.3), as did the vertical structure of the dust, the size distributions and the optical properties. The BAe146 was fully equipped to measure size distributions covering aerosol accumulation, coarse and giant modes. Initial results of size distribution and optical properties of dust from the AER-D flights will be presented, showing that a substantial coarse mode was present, in agreement with previous airborne measurements. Optical properties of dust relating to the measured size distributions will also be presented.
Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole
2015-10-01
According to current models of visual perception scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. We used sequences of six filtered scenes as stimuli depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). Firstly, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... integrated water system that contains and provides the appropriate quantity of coarse substrates such as... reduces water temperature during summer and fall months. Therefore, a complex and integrated stream system... water velocities to support successful spawning. Swift (2001, p. 26) considered that only the Rialto...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinemann, Thomas, E-mail: thomas.heinemann@tu-berlin.de; Klapp, Sabine H. L., E-mail: klapp@physik.tu-berlin.de; Palczynski, Karol, E-mail: karol.palczynski@helmholtz-berlin.de
We present an approach for calculating coarse-grained angle-resolved effective pair potentials for uniaxial molecules. For integrating out the intramolecular degrees of freedom we apply umbrella sampling and steered dynamics techniques in atomistically-resolved molecular dynamics (MD) computer simulations. Throughout this study we focus on disk-like molecules such as coronene. To develop the methods we focus on integrating out the van der Waals and intramolecular interactions, while electrostatic charge contributions are neglected. The resulting coarse-grained pair potential reveals a strong temperature and angle dependence. In the next step we fit the numerical data with various Gay-Berne-like potentials to be used in moremore » efficient simulations on larger scales. The quality of the resulting coarse-grained results is evaluated by comparing their pair and many-body structure as well as some thermodynamic quantities self-consistently to the outcome of atomistic MD simulations of many-particle systems. We find that angle-resolved potentials are essential not only to accurately describe crystal structures but also for fluid systems where simple isotropic potentials start to fail already for low to moderate packing fractions. Further, in describing these states it is crucial to take into account the pronounced temperature dependence arising in selected pair configurations due to bending fluctuations.« less
Development and Application of a Three-dimensional Seismo-acoustic Coupled-mode Model
2014-09-30
of coral reef fish need to locate a reef, and sound emanating from reefs may act as a cue to guide them. Using acoustic data collected from Bahia...approximate the solution to the wave equation. RELATED PROJECTS Geoacoustic inversion in three-dimensional environments The goal of this project is...shear wave speed Under this project, laboratory measurements of the compressional and shear wave speeds and attenuations in coarse and fine grained
Optical properties of aerosols at Grand Canyon National Park
NASA Astrophysics Data System (ADS)
Malm, William C.; Day, Derek E.
Visibility in the United States is expected to improve over the next few decades because of reduced emissions, especially of sulfur dioxide. In the eastern United States, sulfates make up about 60-70% of aerosol extinction, while in the inner mountain west that fraction is only about 30%. In the inner mountain west, carbon aerosols make up about 35% of extinction, while coarse mass contributes between 15 and 25%, depending on how absorption is estimated. Although sulfur dioxide emissions are projected to decrease, carbon emissions due to prescribed fire activity will increase by factors of 5-10, and while the optical properties of sulfates have been studied extensively, the corresponding properties of carbon and coarse particles are less well understood. The inability to conclusively apportion about 50% of the extinction budget motivated a study to examine aerosol physico-chemical-optical properties at Grand Canyon, Arizona during the months of July and August. Coarse particle mass has usually been assumed to consist primarily of wind-blown dust, with a mass-scattering efficiency between about 0.4 and 0.6 m^2 g^-1. Although there were episodes where crustal material made up most of the coarse mass, on average, organic and crustal material mass were about equal. Furthermore, about one-half of the sampling periods had coarse-mass-scattering efficiencies greater than 0.6 m^2 g^-1, and at times coarse-mass-scattering efficiencies were near 1.0 m^2 g^-1. It was shown that absorption by coarse and fine particles was about equal and that both fine organic and sulfate mass-scattering efficiencies were substantially less than the nominal values of 4.0 and 3.0 m^2 g^-1 that have typically been used.
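The extinction-budget arithmetic behind such apportionments is simply mass concentration times mass-scattering efficiency, summed over components. The Python sketch below works through that arithmetic with purely illustrative values (not the Grand Canyon measurements): a concentration in μg m^-3 multiplied by an efficiency in m^2 g^-1 gives a scattering contribution in inverse megameters (Mm^-1).

    # Hypothetical extinction-budget arithmetic (illustrative values only).
    components = {                      # (mass ug/m^3, efficiency m^2/g) -- assumed
        "sulfate":     (1.0, 3.0),
        "organics":    (1.5, 4.0),
        "coarse mass": (5.0, 0.6),
    }
    b_sp = {name: mass * eff for name, (mass, eff) in components.items()}   # Mm^-1
    total = sum(b_sp.values())
    for name, value in b_sp.items():
        print(f"{name:12s} {value:5.1f} Mm^-1  ({100 * value / total:4.1f}% of scattering)")

Uncertainty of a factor of two in the coarse-mass efficiency (0.5 versus 1.0 m^2 g^-1, as reported above) therefore translates directly into a factor-of-two change in the inferred coarse-mass share of scattering.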
Jeanne C. Chambers; Jerry R. Miller
2011-01-01
This report contains the results of a 6-year project conducted by the U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station and U.S. Environmental Protection Agency, Office of Research and Development on stream incision and meadow ecosystem degradation in the central Great Basin. The project included a coarse-scale assessment of 56 different...
Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi
2014-03-01
We propose a hierarchical reduction scheme to cope with coupled rate equations that describe the dynamics of multi-time-scale photosynthetic reactions. To numerically solve nonlinear dynamical equations containing a wide temporal range of rate constants, we first study a prototypical three-variable model. Using a separation of the time scale of rate constants combined with identified slow variables as (quasi-)conserved quantities in the fast process, we achieve a coarse-graining of the dynamical equations reduced to those at a slower time scale. By iteratively employing this reduction method, the coarse-graining of broadly multi-scale dynamical equations can be performed in a hierarchical manner. We then apply this scheme to the reaction dynamics analysis of a simplified model for an illuminated photosystem II, which involves many processes of electron and excitation-energy transfers with a wide range of rate constants. We thus confirm a good agreement between the coarse-grained and fully (finely) integrated results for the population dynamics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
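In the spirit of the prototypical model, the sketch below compares a stiff three-variable chain A -> B -> C, with a fast intermediate B, against the coarse-grained system obtained by treating B as quasi-stationary. The rate constants and this particular reduction are illustrative assumptions, not the authors' hierarchical scheme.

    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0, 1.0e3          # slow and fast rate constants (k2 >> k1)

    def full(t, y):
        # Stiff chain A -> B -> C with a fast intermediate B
        A, B, C = y
        return [-k1 * A, k1 * A - k2 * B, k2 * B]

    def reduced(t, y):
        # Coarse-grained dynamics after eliminating B (quasi-steady B ~ k1*A/k2)
        A, C = y
        return [-k1 * A, k1 * A]

    t_eval = np.linspace(0.0, 5.0, 50)
    sol_full = solve_ivp(full, (0.0, 5.0), [1.0, 0.0, 0.0], method="Radau", t_eval=t_eval)
    sol_red = solve_ivp(reduced, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval)
    print(np.max(np.abs(sol_full.y[2] - sol_red.y[1])))   # small mismatch in C(t)

The mismatch in C(t) is of order k1/k2, which is the kind of controlled error the time-scale separation exploits; the hierarchical scheme described in the abstract repeats this elimination step level by level.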
Choice of baseline climate data impacts projected species' responses to climate change.
Baker, David J; Hartley, Andrew J; Butchart, Stuart H M; Willis, Stephen G
2016-07-01
Climate data created from historic climate observations are integral to most assessments of potential climate change impacts, and frequently comprise the baseline period used to infer species-climate relationships. They are often also central to downscaling coarse resolution climate simulations from General Circulation Models (GCMs) to project future climate scenarios at ecologically relevant spatial scales. Uncertainty in these baseline data can be large, particularly where weather observations are sparse and climate dynamics are complex (e.g. over mountainous or coastal regions). Yet, importantly, this uncertainty is almost universally overlooked when assessing potential responses of species to climate change. Here, we assessed the importance of historic baseline climate uncertainty for projections of species' responses to future climate change. We built species distribution models (SDMs) for 895 African bird species of conservation concern, using six different climate baselines. We projected these models to two future periods (2040-2069, 2070-2099), using downscaled climate projections, and calculated species turnover and changes in species-specific climate suitability. We found that the choice of baseline climate data constituted an important source of uncertainty in projections of both species turnover and species-specific climate suitability, often comparable with, or more important than, uncertainty arising from the choice of GCM. Importantly, the relative contribution of these factors to projection uncertainty varied spatially. Moreover, when projecting SDMs to sites of biodiversity importance (Important Bird and Biodiversity Areas), these uncertainties altered site-level impacts, which could affect conservation prioritization. Our results highlight that projections of species' responses to climate change are sensitive to uncertainty in the baseline climatology. We recommend that this should be considered routinely in such analyses. © 2016 John Wiley & Sons Ltd.
Technology of focus detection for 193nm projection lithographic tool
NASA Astrophysics Data System (ADS)
Di, Chengliang; Yan, Wei; Hu, Song; Xu, Feng; Li, Jinglong
2012-10-01
With the shortening of the printing wavelength and the increasing numerical aperture of lithographic tools, the depth of focus (DOF) drops rapidly, reaching a scale of several hundred nanometers, while the repeatable accuracy of focusing and leveling must be one-tenth of the DOF, approximately several tens of nanometers. Given this requirement, this article first introduces several focusing technologies and compares their advantages and disadvantages. The accuracy of the dual-grating focusing method is then obtained through theoretical calculation. The dual-grating focusing method based on photoelastic modulation is divided into coarse focusing and precise focusing for analysis, establishing an image-processing model for coarse focusing and a photoelastic modulation model for precise focusing. Finally, the focusing algorithm is simulated with MATLAB. In conclusion, the dual-grating focusing method offers high precision, high efficiency, and non-contact measurement of the focal plane, meeting the demands of focusing in 193nm projection lithography.
Impact of low asphalt binder for coarse HMA mixes : final report.
DOT National Transportation Integrated Search
2017-06-01
Asphalt mixtures are commonly specified using volumetric controls in combination with aggregate gradation limits; like most transportation agencies, MnDOT uses this approach. Since 2010, several asphalt paving projects for MnDOT have been...
Development of surface friction guidelines for LADOTD : research project capsule.
DOT National Transportation Integrated Search
2011-02-01
The current friction guideline of the Louisiana Department of : Transportation and Development (LADOTD) for a wearing course mixture : design deals with the polished stone value (PSV) of coarse aggregate : (which is a relative British Pendulum skid-r...
NASA Astrophysics Data System (ADS)
He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry
2008-04-01
The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, meaning that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on computationally intensive iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
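A compact Python sketch of the coarse-to-fine, median-based reconstruction described here is shown below. It assumes the subpixel shifts have already been estimated by a separate registration step, and it uses scipy's cubic spline interpolation in place of true bi-cubic interpolation; both are assumptions of the sketch, not details of the authors' algorithm.

    import numpy as np
    from scipy.ndimage import shift, zoom

    def coarse_to_fine_sr(frames, subpixel_shifts, scale=2):
        # Build the coarsely super-resolved sequence: upsample each low-res frame
        # (cubic spline, standing in for bi-cubic), align it to the reference frame
        # using its estimated subpixel shift, then take the outlier-robust
        # pixel-wise median across the stack.
        stack = []
        for frame, (dy, dx) in zip(frames, subpixel_shifts):
            upsampled = zoom(frame, scale, order=3)
            aligned = shift(upsampled, (dy * scale, dx * scale), order=3)
            stack.append(aligned)
        return np.median(np.stack(stack), axis=0)

    # Toy usage with synthetic low-resolution frames
    rng = np.random.default_rng(0)
    frames = [rng.random((60, 80)) for _ in range(8)]
    subpixel_shifts = [(0.0, 0.0)] + [tuple(rng.uniform(-0.5, 0.5, 2)) for _ in range(7)]
    high_res = coarse_to_fine_sr(frames, subpixel_shifts)

Because the final step is a single pixel-wise median rather than an iterative optimization, the cost grows only linearly with the number of frames, which is the efficiency argument the abstract makes for UAS video.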
Angle-resolved effective potentials for disk-shaped molecules
NASA Astrophysics Data System (ADS)
Heinemann, Thomas; Palczynski, Karol; Dzubiella, Joachim; Klapp, Sabine H. L.
2014-12-01
We present an approach for calculating coarse-grained angle-resolved effective pair potentials for uniaxial molecules. For integrating out the intramolecular degrees of freedom we apply umbrella sampling and steered dynamics techniques in atomistically-resolved molecular dynamics (MD) computer simulations. Throughout this study we focus on disk-like molecules such as coronene. To develop the methods we focus on integrating out the van der Waals and intramolecular interactions, while electrostatic charge contributions are neglected. The resulting coarse-grained pair potential reveals a strong temperature and angle dependence. In the next step we fit the numerical data with various Gay-Berne-like potentials to be used in more efficient simulations on larger scales. The quality of the resulting coarse-grained results is evaluated by comparing their pair and many-body structure as well as some thermodynamic quantities self-consistently to the outcome of atomistic MD simulations of many-particle systems. We find that angle-resolved potentials are essential not only to accurately describe crystal structures but also for fluid systems where simple isotropic potentials start to fail already for low to moderate packing fractions. Further, in describing these states it is crucial to take into account the pronounced temperature dependence arising in selected pair configurations due to bending fluctuations.
NASA Astrophysics Data System (ADS)
Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.
2013-12-01
Sampling limitations and current modeling capacity justify the common use of mean temperature values in summaries of historical climate and future projections. However, a monthly mean temperature representing a 1-km2 area on the landscape is often unable to capture the climate complexity driving organismal and ecological processes. Estimates of variability in addition to mean values are more biologically meaningful and have been shown to improve projections of range shifts for certain species. Historical analyses of variance and extreme events at coarse spatial scales, as well as coarse-scale projections, show increasing temporal variability in temperature with warmer means. Few studies have considered how spatial variance changes with warming, and analysis for both temporal and spatial variability across scales is lacking. It is unclear how the spatial variability of fine-scale conditions relevant to plant and animal individuals may change given warmer coarse-scale mean values. A change in spatial variability will affect the availability of suitable habitat on the landscape and thus, will influence future species ranges. By characterizing variability across both temporal and spatial scales, we can account for potential bias in species range projections that use coarse climate data and enable improvements to current models. In this study, we use temperature data at multiple spatial and temporal scales to characterize spatial and temporal variability under a warmer climate, i.e., increased mean temperatures. Observational data from the Sierra Nevada (California, USA), experimental climate manipulation data from the eastern and western slopes of the Rocky Mountains (Colorado, USA), projected CMIP5 data for California (USA) and observed PRISM data (USA) allow us to compare characteristics of a mean-variance relationship across spatial scales ranging from sub-meter2 to 10,000 km2 and across temporal scales ranging from hours to decades. Preliminary spatial analysis at fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased -- the direction and magnitude requiring further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time-series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.
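One way to probe the mean-variance relationship across spatial scales is to aggregate a fine-resolution temperature grid into coarse blocks and compare each block's mean with its internal spatial variance, as in the Python sketch below. The block size, the synthetic grid, and all names are illustrative assumptions, not the study's analysis code.

    import numpy as np

    def block_mean_variance(temp, block):
        # Aggregate a fine-resolution temperature grid into (block x block) coarse
        # cells and return each cell's mean together with its within-cell spatial
        # variance, to examine how fine-scale variability tracks the coarse mean.
        ny, nx = temp.shape[0] // block, temp.shape[1] // block
        tiles = temp[:ny * block, :nx * block].reshape(ny, block, nx, block)
        means = tiles.mean(axis=(1, 3)).ravel()
        variances = tiles.var(axis=(1, 3), ddof=1).ravel()
        return means, variances

    # Toy usage: a synthetic 1000x1000 fine grid aggregated into 100x100-cell blocks
    rng = np.random.default_rng(2)
    fine_temps = 15.0 + rng.normal(0.0, 2.0, (1000, 1000))
    means, variances = block_mean_variance(fine_temps, block=100)

Regressing the within-block variance on the block mean across warm and cool periods is one simple way to test whether fine-scale variability can be treated as static, the assumption questioned in the abstract.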
NASA Astrophysics Data System (ADS)
Wang, J.
2006-12-01
A total of 614 sediment samples at intervals of about 1.5 m from all 5 sites of the Integrated Ocean Drilling Program (IODP) Expedition 311 on the Cascadia Margin were analyzed using a Beckman Coulter LS-230 Particle Analyzer. The grain-size data were then plotted versus depth and compared with other proxies of gas hydrate occurrence such as soupy/mousse-like structures in sediments, gas hydrate concentration (Sh) derived from LWD data using Archie's relation, infrared (IR) core images, and recovered samples of gas hydrate-bearing sediments. A good relationship between the distribution of coarse grains in the 31-63 μm and 63-125 μm size fractions and the potential occurrence of gas hydrate was found across the entire gas hydrate stability zone. The depth distribution of grain size from Site U1326 shows clear excursions at depths of 5-8, 21-26, 50-123, 132-140, 167-180, 195-206 and 220-240 mbsf, which coincide with the potential occurrence of gas hydrate suggested by soupy/mousse-like structures, logging-derived gas hydrate concentrations (Sh) and the recovered samples of the gas hydrate-bearing sand layers. The lithology of sediments significantly affects the formation of gas hydrate. Gas hydrate forms preferentially within relatively coarse sediments with grain sizes above 31 μm. Key words: grain size of sediments, constraint, occurrence of gas hydrate, IODP 311. IODP Expedition 311 Scientists: Michael Riedel (Co-chief Scientist), Timothy S. Collett (Co-chief Scientist), Mitchell Malone (Expedition Project Manager/Staff Scientist), Gilles Guérin, Fumio Akiba, Marie-Madeleine Blanc-Valleron, Michelle Ellis, Yoshitaka Hashimoto, Verena Heuer, Yosuke Higashi, Melanie Holland, Peter D. Jackson, Masanori Kaneko, Miriam Kastner, Ji-Hoon Kim, Hiroko Kitajima, Philip E. Long, Alberto Malinverno, Greg Myers, Leena D. Palekar, John Pohlman, Peter Schultheiss, Barbara Teichert, Marta E. Torres, Anne M. Tréhu, Jiasheng Wang, Ulrich G. Wortmann, Hideyoshi Yoshioka. Acknowledgement: This study was supported by the IODP/JOI Alliance, IODP-China 863 Project (grant 2004AA615030) and NSFC Project (grant 40472063).
Potential use and applications for reclaimed millings.
DOT National Transportation Integrated Search
2015-06-01
The purpose of this project was to provide support to PennDOT District 1-0 in the effective use of milled asphalt material. Specifically, : District 1-0 has a shortage of high-quality available coarse aggregate and has developed the innovative proced...
Manufacturing and integration of the SOFIA suspension assembly
NASA Astrophysics Data System (ADS)
Sust, Eberhard; Weis, Ulrich; Bremers, Eckhard; Schubbach, Walter
2003-02-01
The Suspension Assembly is the most complex mechanical subsystem of the SOFIA telescope, responsible for suspending the telescope in the aircraft and positioning it on the sky. It is a highly integrated system comprising a vibration isolating system, a spherical hydraulic bearing, a spherical torque motor, a coarse drive, and airworthiness-relevant components such as brakes and hard-stops. The components were manufactured under airworthiness standards by dedicated suppliers and were integrated and commissioned in 2001/2002 at MAN Technologie in Augsburg. The paper describes the experience gained during the manufacturing and integration process.
Uncooled infrared sensors for an integrated sniper location system
NASA Astrophysics Data System (ADS)
Spera, Timothy J.; Figler, Burton D.
1997-02-01
Since July of 1995, Lockheed Martin IR Imaging Systems of Lexington, Massachusetts, has been developing an integrated sniper location system for the Advanced Research Projects Agency (ARPA) and for the Department of the Navy's Naval Command Control & Ocean Surveillance Center, RDTE Division in San Diego, California. This system integrates two technologies to provide an affordable and highly effective sniper detection and location capability. The integrated sniper location system is being developed for use by the military and by law enforcement agencies. It will be man portable and can be used by individuals, at fixed ground sites, on ground vehicles, and on low flying aircraft. The integrated sniper location system combines an acoustic warning system with an uncooled infrared warning system. The acoustic warner is being developed by SenTech, Inc. of Lexington, Massachusetts. This acoustic warner provides sniper detection and coarse location information based upon the muzzle blast of the sniper's weapon and/or upon the shock wave produced by the sniper's bullet, if the bullet is supersonic. The uncooled infrared warning system provides sniper detection and fine location information based upon the weapon's muzzle flash. Combining the two technologies improves detection probability and reduces the false alarm rate. This paper describes the integrated sniper location system, focusing on the uncooled infrared sensor and its associated signal processing. In addition, preliminary results from Phase I testing of the system are presented. Finally, the paper addresses the plans for implementing Phases II and III, during which the system will be optimized in terms of detection and location performance, size, weight, power, and cost.
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
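A minimal sketch of the coarse-grid source-term idea, distributing the integrated hole-exit mass and momentum fluxes uniformly over coarse cells within about one hole diameter of the wall, is given below. The cell arrays, units, and the uniform distribution rule are assumptions for illustration, not the study's implementation or the APNASA interface.

    import numpy as np

    def volumetric_sources(mdot, mom_flux, d_hole, cell_volume, y_cell):
        # Distribute the integrated hole-exit mass flow [kg/s] and momentum flux [N]
        # uniformly over coarse cells whose centers lie within one hole diameter of
        # the wall, returning per-cell volumetric sources [kg/(m^3 s)] and [N/m^3].
        near_wall = y_cell <= d_hole
        v_total = cell_volume[near_wall].sum()
        s_mass = np.where(near_wall, mdot / v_total, 0.0)
        s_mom = np.where(near_wall, mom_flux / v_total, 0.0)
        return s_mass, s_mom

    # Toy usage: a single column of coarse cells above the hole (illustrative values)
    y_cell = np.array([0.0005, 0.0015, 0.0030, 0.0060])   # wall distances [m]
    cell_volume = np.full(4, 1.0e-9)                      # cell volumes [m^3]
    s_mass, s_mom = volumetric_sources(1.0e-5, 2.0e-4, d_hole=0.002,
                                       cell_volume=cell_volume, y_cell=y_cell)

Adding these terms to the conservation equations of the coarse cells stands in for the unresolved hole, which is the mechanism the study evaluates against the detailed multiblock solutions.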
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory
2006-01-01
The presentation describes the recently awarded ACCESS project to provide data management of NASA remote sensing data for the Northern Eurasia Earth Science Partnership Initiative (NEESPI). The project targets integration of remote sensing data from MODIS, and other NASA instruments on board US-satellites (with potential expansion to data from non-US satellites), customized data products from climatology data sets (e.g., ISCCP, ISLSCP) and model data (e.g., NCEP/NCAR) into a single, well-architected data management system. It will utilize two existing components developed by the Goddard Earth Sciences Data & Information Services Center (GES DISC) at the NASA Goddard Space Flight Center: (1) online archiving and distribution system, that allows collection, processing and ingest of data from various sources into the online archive, and (2) user-friendly intelligent web-based online visualization and analysis system, also known as Giovanni. The former includes various kinds of data preparation for seamless interoperability between measurements by different instruments. The latter provides convenient access to various geophysical parameters measured in the Northern Eurasia region without any need to learn complicated remote sensing data formats, or retrieve and process large volumes of NASA data. Initial implementation of this data management system will concentrate on atmospheric data and surface data aggregated to coarse resolution to support collaborative environment and climate change studies and modeling, while at later stages, data from NASA and non-NASA satellites at higher resolution will be integrated into the system.
NASA Astrophysics Data System (ADS)
Dinsmore, P.; Prepas, E.; Putz, G.; Smith, D.
2008-12-01
The Forest Watershed and Riparian Disturbance (FORWARD) Project has collected data on weather, soils, vegetation, streamflow and stream water quality under relatively undisturbed conditions, as well as after experimental forest harvest, in partnership with industrial forest operations within the Boreal Plain and Boreal Shield ecozones of Canada. Research-based contributions from FORWARD were integrated into our Boreal Plain industry partner's 2007-2016 Detailed Forest Management Plan. These contributions consisted of three components: 1) a GIS watershed and stream layer that included a hydrological network, a Digital Elevation Model, and Strahler classified streams and watersheds for 1st- and 3rd-order watersheds; 2) a combined soil and wetland GIS layer that included maps and associated datasets for relatively coarse mineral soils (which drain quickly) and wetlands (which retain water), which were the key features that needed to be identified for the FORWARD modelling effort; and 3) a lookup table that permits planners to determine runoff coefficients (the variable selected for hydrological modelling) for 1st-order watersheds, based upon slope, vegetation and soil attributes in forest polygons. The lookup table was populated with output from the deterministic Soil and Water Assessment Tool (SWAT), adapted for boreal forest vegetation with a version of the plant growth model, ALMANAC. The runoff coefficient lookup table facilitated integration of predictions of hydrologic impacts of forest harvest into planning. This pilot-scale effort will ultimately be extended to the Boreal Shield study area.
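The lookup-table component reduces, in code, to a keyed table of runoff coefficients with a fallback value, as in the Python sketch below. The keys, coefficient values, and default are purely illustrative assumptions, not FORWARD or SWAT output.

    # Hypothetical lookup-table sketch: keys and coefficients are illustrative.
    RUNOFF_COEFF = {
        ("low",  "conifer",   "coarse"): 0.10,
        ("low",  "deciduous", "fine"):   0.25,
        ("high", "conifer",   "coarse"): 0.20,
        ("high", "mixedwood", "fine"):   0.35,
    }

    def runoff_coefficient(slope_class, vegetation, soil, default=0.30):
        # Return the tabulated runoff coefficient for a 1st-order watershed
        # polygon, falling back to a default when the combination is missing.
        return RUNOFF_COEFF.get((slope_class, vegetation, soil), default)

    print(runoff_coefficient("low", "conifer", "coarse"))

Keying the table on polygon attributes that planners already track (slope, vegetation, soil) is what lets hydrologic predictions be folded into forest-management planning without rerunning the process model for every scenario.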
Using a Coupled Lake Model with WRF for Dynamical Downscaling
The Weather Research and Forecasting (WRF) model is used to downscale a coarse reanalysis (National Centers for Environmental Prediction–Department of Energy Atmospheric Model Intercomparison Project reanalysis, hereafter R2) as a proxy for a global climate model (GCM) to examine...
Using recycled concrete as aggregate in concrete pavements to reduce materials cost.
DOT National Transportation Integrated Search
2013-08-01
The main objective of this project was to evaluate the effects of using aggregate produced from crushed concrete pavement as a replacement for natural (virgin) coarse aggregate in pavement mixtures. A total of ten different concrete mixtures containi...
Coarse climate change projections for species living in a fine-scaled world.
Nadeau, Christopher P; Urban, Mark C; Bridle, Jon R
2017-01-01
Accurately predicting biological impacts of climate change is necessary to guide policy. However, the resolution of climate data could be affecting the accuracy of climate change impact assessments. Here, we review the spatial and temporal resolution of climate data used in impact assessments and demonstrate that these resolutions are often too coarse relative to biologically relevant scales. We then develop a framework that partitions climate into three important components: trend, variance, and autocorrelation. We apply this framework to map different global climate regimes and identify where coarse climate data is most and least likely to reduce the accuracy of impact assessments. We show that impact assessments for many large mammals and birds use climate data with a spatial resolution similar to the biologically relevant area encompassing population dynamics. Conversely, impact assessments for many small mammals, herpetofauna, and plants use climate data with a spatial resolution that is orders of magnitude larger than the area encompassing population dynamics. Most impact assessments also use climate data with a coarse temporal resolution. We suggest that climate data with a coarse spatial resolution is likely to reduce the accuracy of impact assessments the most in climates with high spatial trend and variance (e.g., much of western North and South America) and the least in climates with low spatial trend and variance (e.g., the Great Plains of the USA). Climate data with a coarse temporal resolution is likely to reduce the accuracy of impact assessments the most in the northern half of the northern hemisphere where temporal climatic variance is high. Our framework provides one way to identify where improving the resolution of climate data will have the largest impact on the accuracy of biological predictions under climate change. © 2016 John Wiley & Sons Ltd.
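The trend, variance, and autocorrelation partition described above can be computed from a climate time series along the following lines; the synthetic data, the linear-trend fit, and the lag-1 autocorrelation choice are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of the trend / variance / autocorrelation decomposition of a
# temperature series; data here are synthetic placeholders.
import numpy as np

def climate_components(t, x):
    """Return linear trend, detrended variance, and lag-1 autocorrelation."""
    slope, intercept = np.polyfit(t, x, 1)       # long-term trend
    resid = x - (slope * t + intercept)          # detrended series
    variance = resid.var()
    lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    return slope, variance, lag1

rng = np.random.default_rng(0)
years = np.arange(1980, 2020)
temps = 0.03 * (years - 1980) + rng.normal(0.0, 0.5, years.size)
print(climate_components(years, temps))
```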
A prototype coarse pointing mechanism for laser communication
NASA Astrophysics Data System (ADS)
Miller, Eric D.; DeSpenza, Michael; Gavrilyuk, Ilya; Nelson, Graham; Erickson, Brent; Edwards, Britney; Davis, Ethan; Truscott, Tony
2017-02-01
Laser communication systems promise orders-of-magnitude improvement in data throughput per unit SWaP (size, weight and power) compared to conventional RF systems. However, in order for lasercom to make sense economically as part of a worldwide connectivity solution, the cost per terminal still needs to be significantly reduced. In this paper, we describe a coarse pointing mechanism that has been designed with an emphasis on simplicity, making use of conventional materials and commercial off-the-shelf components wherever possible. An overview of the design architecture and trades is presented, along with various results and practical lessons learned during prototype integration and test.
DOT National Transportation Integrated Search
1998-09-01
This report summarizes 23 years of work undertaken in Texas to understand the reasons for significant performance differences found in pavements placed around the state. To a significant degree, pavement performance can be predicted based on the conc...
DOT National Transportation Integrated Search
2011-12-31
Twelve field projects were studied where forty-four locations were evaluated to assess the cause or : causes of asphalt concrete that exhibits tender zone characteristics (i.e. instability during compaction) and to : investigate the tendency of...
Dynamic subfilter-scale stress model for large-eddy simulations
NASA Astrophysics Data System (ADS)
Rouhi, A.; Piomelli, U.; Geurts, B. J.
2016-08-01
We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
40 CFR 430.75 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Mechanical Pulp Subcategory § 430.75 New source performance standards (NSPS). (a) The following applies to mechanical pulp...-mechanical process; mechanical pulp facilities where the integrated production of pulp and coarse paper...
Space Debris Detection on the HPDP, a Coarse-Grained Reconfigurable Array Architecture for Space
NASA Astrophysics Data System (ADS)
Suarez, Diego Andres; Bretz, Daniel; Helfers, Tim; Weidendorfer, Josef; Utzmann, Jens
2016-08-01
Stream processing, widely used in communications and digital signal processing applications, requires high-throughput data processing that is achieved in most cases using Application-Specific Integrated Circuit (ASIC) designs. Lack of programmability is an issue especially in space applications, which use on-board components with long life-cycles requiring application updates. To this end, the High Performance Data Processor (HPDP) architecture integrates an array of coarse-grained reconfigurable elements to provide both flexible and efficient computational power suitable for stream-based data processing applications in space. In this work, the capabilities of the HPDP architecture are demonstrated with the implementation of a real-time image processing algorithm for space debris detection in a space-based space surveillance system. The implementation challenges and alternatives are described, making trade-offs to improve performance at the expense of negligible degradation of detection accuracy. The proposed implementation uses over 99% of the available computational resources. Performance estimations based on simulations show that the HPDP can amply match the application requirements.
NASA Astrophysics Data System (ADS)
Sutherland, Andrew B.; Culp, Joseph M.; Benoy, Glenn A.
2012-07-01
The objective of this study was to evaluate which macroinvertebrate and deposited sediment metrics are best for determining effects of excessive sedimentation on stream integrity. Fifteen instream sediment metrics, with the strongest relationship to land cover, were compared to riffle macroinvertebrate metrics in streams ranging across a gradient of land disturbance. Six deposited sediment metrics were strongly related to the relative abundance of Ephemeroptera, Plecoptera and Trichoptera and six were strongly related to the modified family biotic index (MFBI). Few functional feeding groups and habit groups were significantly related to deposited sediment, and this may be related to the focus on riffle, rather than reach-wide macroinvertebrates, as reach-wide sediment metrics were more closely related to human land use. Our results suggest that the coarse-level deposited sediment metric, visual estimate of fines, and the coarse-level biological index, MFBI, may be useful in biomonitoring efforts aimed at determining the impact of anthropogenic sedimentation on stream biotic integrity.
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points spaced 10 km. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were carried out using the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data; however, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly because of its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others, either for the whole region or for any season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.
Sander, S; Behnisch, J; Wagner, M
2017-02-01
With the MBBR IFAS (moving bed biofilm reactor integrated fixed-film activated sludge) process, the biomass required for biological wastewater treatment is either suspended or fixed on free-moving plastic carriers in the reactor. Coarse- or fine-bubble aeration systems are used in the MBBR IFAS process. In this study, the oxygen transfer efficiency (OTE) of a coarse-bubble aeration system was improved significantly by the addition of the investigated carriers, even in-process (∼1% per vol-% of added carrier material). In a fine-bubble aeration system, the carriers had little or no effect on OTE. The effect of carriers on OTE strongly depends on the properties of the aeration system, the volumetric filling rate of the carriers, the properties of the carrier media, and the reactor geometry. This study shows that the effect of carriers on OTE is less pronounced in-process compared to clean water conditions. When designing new carriers in order to improve their effect on OTE further, suppliers should take this into account. Although the energy efficiency and cost effectiveness of coarse-bubble aeration systems can be improved significantly by the addition of carriers, fine-bubble aeration systems remain the more efficient and cost-effective alternative for aeration when applying the investigated MBBR IFAS process.
NASA Astrophysics Data System (ADS)
King, C. H.; Wagenbrenner, J.; Fedora, M.; Watkins, D.; Watkins, M. K.; Huckins, C.
2017-12-01
The Great Lakes Region of North America has experienced more frequent extreme precipitation events in recent decades, resulting in a large number of stream crossing failures. While there are accepted methods for designing stream crossings to accommodate peak storm discharges, less attention has been paid to assessing the risk of failure. To evaluate failure risk and potential impacts, coarse-resolution stream crossing surveys were completed on 51 stream crossings and dams in the North Branch Paint River watershed in Michigan's Upper Peninsula. These inventories determined stream crossing dimensions along with stream and watershed characteristics. Eleven culverts were selected from the coarse surveys for high resolution hydraulic analysis to estimate discharge conditions expected at crossing failure. Watershed attributes upstream of the crossing, including area, slope, and storage, were acquired. Sediment discharge and the economic impact associated with a failure event were also estimated for each stream crossing. Impacts to stream connectivity and fish passability were assessed from the coarse-level surveys. Using information from both the coarse and high-resolution surveys, we also developed indicators to predict failure risk without the need for complex hydraulic modeling. These passability scores and failure risk indicators will help to prioritize infrastructure replacement and improve the overall connectivity of river systems throughout the upper Great Lakes Region.
Mori-Zwanzig theory for dissipative forces in coarse-grained dynamics in the Markov limit
NASA Astrophysics Data System (ADS)
Izvekov, Sergei
2017-01-01
We derive alternative Markov approximations for the projected (stochastic) force and memory function in the coarse-grained (CG) generalized Langevin equation, which describes the time evolution of the center-of-mass coordinates of clusters of particles in the microscopic ensemble. This is done with the aid of the Mori-Zwanzig projection operator method based on the recently introduced projection operator [S. Izvekov, J. Chem. Phys. 138, 134106 (2013), 10.1063/1.4795091]. The derivation exploits the "generalized additive fluctuating force" representation to which the projected force reduces in the adopted projection operator formalism. For the projected force, we present a first-order time expansion which correctly extends the static fluctuating force ansatz with the terms necessary to maintain the required orthogonality of the projected dynamics in the Markov limit to the space of CG phase variables. The approximant of the memory function correctly accounts for the momentum dependence in the lowest (second) order and indicates that such a dependence may be important in the CG dynamics approaching the Markov limit. In the case of CG dynamics with a weak dependence of the memory effects on the particle momenta, the expression for the memory function presented in this work is applicable to non-Markov systems. The approximations are formulated in a propagator-free form allowing their efficient evaluation from the microscopic data sampled by standard molecular dynamics simulations. A numerical application is presented for a molecular liquid (nitromethane). With our formalism we do not observe the "plateau-value problem" if the friction tensors for dissipative particle dynamics (DPD) are computed using the Green-Kubo relation. Our formalism provides a consistent bottom-up route for hierarchical parametrization of DPD models from atomistic simulations.
GPS Integrity Channel RTCA Working Group recommendations
NASA Astrophysics Data System (ADS)
Kalafus, Rudolph M.
Recommendations made by a working group established by the Radio Technical Commission for Aeronautics are presented for the design of a wide-area broadcast service to provide indications on the status of GPS satellites. The integrity channel requirements and operational goals are outlined. Six integrity channel system concepts are considered and system design and time-to-alarm considerations are examined. The recommended system includes the broadcast of a coarse range measurement for each satellite which will enable the on-board GPS receiver to determine whether or not the navigation accuracy is within prescribed limits.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
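A minimal sketch of one coarse-grid projection step is given below, assuming a periodic collocated grid, a plain Jacobi Poisson solver, and factor-of-two restriction/prolongation; the paper's version instead plugs in black-box solvers and its own interpolation scheme, so everything named here is an illustrative assumption.

```python
# Minimal CGP-style projection step: solve the pressure Poisson equation on a
# coarsened grid and interpolate back to the fine grid. Periodic boundaries
# and simple transfer operators are assumed for brevity.
import numpy as np

def jacobi_poisson(rhs, h, iters=2000):
    """Approximately solve lap(p) = rhs on a periodic grid (rhs ~ zero mean)."""
    p = np.zeros_like(rhs)
    for _ in range(iters):
        p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                    + np.roll(p, 1, 1) + np.roll(p, -1, 1) - h * h * rhs)
    return p

def restrict(f):
    """Average 2x2 blocks of fine cells onto one coarse cell."""
    return 0.25 * (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2])

def prolong(fc):
    """Piecewise-constant prolongation back to the fine grid."""
    return np.kron(fc, np.ones((2, 2)))

def cgp_projection(u, v, h, dt):
    """Make (u, v) approximately divergence-free, solving Poisson coarsely."""
    div = ((np.roll(u, -1, 0) - np.roll(u, 1, 0))
           + (np.roll(v, -1, 1) - np.roll(v, 1, 1))) / (2 * h)
    p_c = jacobi_poisson(restrict(div) / dt, 2 * h)   # cheap coarse-grid solve
    p = prolong(p_c)                                  # interpolate to fine grid
    u = u - dt * (np.roll(p, -1, 0) - np.roll(p, 1, 0)) / (2 * h)
    v = v - dt * (np.roll(p, -1, 1) - np.roll(p, 1, 1)) / (2 * h)
    return u, v, p
```

Only the Poisson solve is coarsened here; the advection-diffusion step (not shown) would remain on the fine grid, which is the essence of the CGP splitting.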
40 CFR 430.77 - Pretreatment standards for new sources (PSNS).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Mechanical Pulp Subcategory § 430.77 Pretreatment standards for new sources (PSNS). (a) The following applies to mechanical... thermo-mechanical process; mechanical pulp facilities where the integrated production of pulp and coarse...
Mapping population-based structural connectomes.
Zhang, Zhengwu; Descoteaux, Maxime; Zhang, Jingwen; Girard, Gabriel; Chamberland, Maxime; Dunson, David; Srivastava, Anuj; Zhu, Hongtu
2018-05-15
Advances in understanding the structural connectomes of the human brain require improved approaches for the construction, comparison and integration of high-dimensional whole-brain tractography data from a large number of individuals. This article develops a population-based structural connectome (PSC) mapping framework to address these challenges. PSC simultaneously characterizes a large number of white matter bundles within and across different subjects by registering different subjects' brains based on coarse cortical parcellations, compressing the bundles of each connection, and extracting novel connection weights. A robust tractography algorithm and streamline post-processing techniques, including dilation of gray matter regions, streamline cutting, and outlier streamline removal, are applied to improve the robustness of the extracted structural connectomes. The developed PSC framework can be used to reproducibly extract binary networks, weighted networks and streamline-based brain connectomes. We apply the PSC to Human Connectome Project data to illustrate its application in characterizing normal variations and heritability of structural connectomes in healthy subjects. Copyright © 2018 Elsevier Inc. All rights reserved.
Automated array assembly task development of low-cost polysilicon solar cells
NASA Technical Reports Server (NTRS)
Jones, G. T.
1980-01-01
This program developed low-cost, large-area polysilicon solar cells. Three types of polysilicon material were investigated. A theoretical and experimental comparison between single crystal silicon and polysilicon solar cell efficiency was performed. Significant electrical performance differences were observed between types of wafer material, i.e., fine grain and coarse grain polysilicon and single crystal silicon. Efficiency degradation due to grain boundaries in fine grain and coarse grain polysilicon was shown to be small. It was demonstrated that 10 percent efficient polysilicon solar cells can be produced with spray-on n+ dopants. This result fulfills an important goal of the project: production of batch quantities of 10 percent efficient polysilicon solar cells.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mechanical Pulp Subcategory § 430.73 Effluent limitations guidelines representing the degree of effluent...) The following applies to: mechanical pulp facilities where the integrated production of pulp and coarse paper, molded pulp products, and newsprint at groundwood mills occurs; and mechanical pulp...
Population exposure to ambient particulate matter (PM) has received considerable attention due to the association between ambient particulate concentrations and mortality. Current toxicological studies and controlled human and animal exposures suggest that all size fractions of...
PMHOME: A DATABASE OF CONTINUOUS PARTICLE MEASUREMENTS IN AN OCCUPIED HOUSE OVER A FOUR-YEAR PERIOD
Although considerable data exist on 24-hour integrated measurements of fine and coarse particles indoors, much less information is available on moment-to-moment variation for a full range of particle sizes including ultrafine particles. Also, information is limited on the rela...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druinsky, Alex; Ghysels, Pieter; Li, Xiaoye S.
In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data comes from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread counts. We also developed a bounds-and-bottlenecks performance model of the solver, which we used to guide the optimization effort, and carried out performance tuning in the solver’s large parameter space. As a result, significant speedups were obtained on both machines.
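The role played by the coarse-grid solver in a two-level method can be seen in the toy cycle below for a 1D Poisson problem; the damped-Jacobi smoother, injection/linear transfer operators, and dense direct coarse solve are simple stand-ins for the preconditioned-CG and HSS-based solvers benchmarked in the study.

```python
# Toy two-level cycle for -u'' = f with u(0) = u(1) = 0, showing where the
# coarse-grid solve enters; all components are illustrative stand-ins.
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] - 2 * u[1:-1] + h * h * f[1:-1])
    return u

def coarse_solve(rc, hc):
    """Direct solve of the coarse system (stand-in for PCG or the HSS solver)."""
    n = rc.size - 2
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / hc**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                                # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2    # residual
    ec = coarse_solve(r[::2].copy(), 2 * h)                      # coarse-grid solve
    e = np.zeros_like(u)
    e[::2] = ec                                                  # prolongate
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, sweeps=3)                         # post-smooth

n, h = 65, 1.0 / 64
u, f = np.zeros(n), np.ones(n)
for _ in range(10):
    u = two_grid(u, f, h)    # converges toward u(x) = x(1 - x)/2
```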
Coarse-grained simulations of protein-protein association: an energy landscape perspective.
Ravikumar, Krishnakumar M; Huang, Wei; Yang, Sichun
2012-08-22
Understanding protein-protein association is crucial in revealing the molecular basis of many biological processes. Here, we describe a theoretical simulation pipeline to study protein-protein association from an energy landscape perspective. First, a coarse-grained model is implemented and its applications are demonstrated via molecular dynamics simulations for several protein complexes. Second, an enhanced search method is used to efficiently sample a broad range of protein conformations. Third, multiple conformations are identified and clustered from simulation data and further projected on a three-dimensional globe specifying protein orientations and interacting energies. Results from several complexes indicate that the crystal-like conformation is favorable on the energy landscape even if the landscape is relatively rugged with metastable conformations. A closer examination on molecular forces shows that the formation of associated protein complexes can be primarily electrostatics-driven, hydrophobics-driven, or a combination of both in stabilizing specific binding interfaces. Taken together, these results suggest that the coarse-grained simulations and analyses provide an alternative toolset to study protein-protein association occurring in functional biomolecular complexes. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
General MoM Solutions for Large Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B; Capolino, F; Wilton, D R
2003-07-22
This paper focuses on a numerical procedure that addresses the difficulties of dealing with large, finite arrays while preserving the generality and robustness of full-wave methods. We present a fast method based on approximating interactions between sufficiently separated array elements via a relatively coarse interpolation of the Green's function on a uniform grid commensurate with the array's periodicity. The interaction between the basis and testing functions is reduced to a three-stage process. The first stage is a projection of standard (e.g., RWG) subdomain bases onto a set of interpolation functions that interpolate the Green's function on the array face. This projection, which is used in a matrix/vector product for each array cell in an iterative solution process, need only be carried out once for a single cell and results in a low-rank matrix. An intermediate stage matrix/vector product computation involving the uniformly sampled Green's function is of convolutional form in the lateral (transverse) directions so that a 2D FFT may be used. The final stage is a third matrix/vector product computation involving a matrix resulting from projecting testing functions onto the Green's function interpolation functions; the low-rank matrix is either identical to (using Galerkin's method) or similar to that for the bases projection. An effective MoM solution scheme is developed for large arrays using a modification of the AIM (Adaptive Integral Method) method. The method permits the analysis of arrays with arbitrary contours and nonplanar elements. Both fill and solve times within the MoM method are improved with respect to more standard MoM solvers.
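The intermediate, convolution-form stage can be sketched as follows: a uniformly sampled free-space Green's function applied to gridded source coefficients through a zero-padded 2D FFT. The grid, the wavenumber k0, and the coefficient array q are placeholder assumptions, not the paper's projected operators.

```python
# FFT-accelerated convolution with a uniformly sampled Green's function,
# illustrating the 2D-FFT stage; all values are illustrative placeholders.
import numpy as np

k0, dx = 2 * np.pi, 0.05          # assumed wavenumber and grid spacing
nx, ny = 32, 32                   # interpolation grid on the array face

# Green's function sampled at all relative offsets (singular self-term zeroed;
# in practice it is treated separately).
off_x = np.arange(-nx + 1, nx) * dx
off_y = np.arange(-ny + 1, ny) * dx
X, Y = np.meshgrid(off_x, off_y)
R = np.hypot(X, Y)
G = np.where(R > 0, np.exp(-1j * k0 * R) / (4 * np.pi * np.where(R > 0, R, 1.0)), 0.0)

q = np.random.default_rng(1).standard_normal((ny, nx))   # gridded source coeffs

# Linear convolution V[m, n] = sum_{m', n'} G[m - m', n - n'] q[m', n'] via FFT
shape = (2 * ny - 1, 2 * nx - 1)
V_full = np.fft.ifft2(np.fft.fft2(G, shape) * np.fft.fft2(q, shape))
V = V_full[ny - 1:2 * ny - 1, nx - 1:2 * nx - 1]          # samples aligned with q
```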
Using GIS and coarse-scale, publicly-available data, the GLEI project has defined the landscape character of areas draining to 76d2 shoreline segments - the entire US portion of the Great Lakes basin. Using principal components and clustering analyses to discriminate among the se...
In this study, techniques typically used for future air quality projections are applied to a historical 11-year period to assess the performance of the modeling system when the driving meteorological conditions are obtained using dynamical downscaling of coarse-scale fields witho...
Dynamics of essential collective motions in proteins: Theory
NASA Astrophysics Data System (ADS)
Stepanova, Maria
2007-11-01
A general theoretical background is introduced for characterization of conformational motions in protein molecules, and for building reduced coarse-grained models of proteins, based on the statistical analysis of their phase trajectories. Using the projection operator technique, a system of coupled generalized Langevin equations is derived for essential collective coordinates, which are generated by principal component analysis of molecular dynamics trajectories. The number of essential degrees of freedom is not limited in the theory. An explicit analytic relation is established between the generalized Langevin equation for essential collective coordinates and that for the all-atom phase trajectory projected onto the subspace of essential collective degrees of freedom. The theory introduced is applied to identify correlated dynamic domains in a macromolecule and to construct coarse-grained models representing the conformational motions in a protein through a few interacting domains embedded in a dissipative medium. A rigorous theoretical background is provided for identification of dynamic correlated domains in a macromolecule. Examples of domain identification in protein G are given and employed to interpret NMR experiments. Challenges and potential outcomes of the theory are discussed.
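A minimal sketch of extracting essential collective coordinates by principal component analysis of a trajectory is shown below; the trajectory array is a synthetic placeholder standing in for n_frames by 3N Cartesian coordinates, and the rest of the theory (the coupled generalized Langevin equations) is not reproduced here.

```python
# PCA of a (synthetic) trajectory to obtain essential collective coordinates.
import numpy as np

def essential_coordinates(traj, n_modes=3):
    """Project a trajectory onto its leading principal components."""
    fluct = traj - traj.mean(axis=0)                 # remove average structure
    cov = fluct.T @ fluct / (len(traj) - 1)          # covariance matrix
    evals, evecs = np.linalg.eigh(cov)               # ascending eigenvalues
    order = np.argsort(evals)[::-1][:n_modes]        # largest-variance modes
    modes = evecs[:, order]
    return fluct @ modes, modes, evals[order]        # essential coordinates

rng = np.random.default_rng(2)
traj = rng.standard_normal((500, 30))                # 500 frames, 10 atoms x 3
proj, modes, variances = essential_coordinates(traj)
```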
Goal-oriented rectification of camera-based document images.
Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J
2011-04-01
Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied on the word level aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology using a consistent evaluation methodology that encounters OCR accuracy and a newly introduced measure using a semi-automatic procedure.
NASA Astrophysics Data System (ADS)
Cook, G. D.; Liedloff, A. C.; Richards, A. E.; Meyer, M.
2016-12-01
Australia is the only OECD country with a significant area of tropical savannas within its borders. Approximately 220 000 km2 of these savannas burn every year, releasing 2 to 4% of Australia's accountable greenhouse gas emissions. Reducing uncertainty in the quantification of these emissions of methane and nitrous oxide has been fundamental to improving both the national GHG inventory and developing approaches to better manage land to reduce these emissions. Projects to reduce pyrogenic emissions have been adopted across 30% of Australia's high rainfall savannas. Recent work has focussed on quantifying the additional benefit of increased carbon stocks in fine fuel and coarse woody debris (CWD) resulting from improvements in fire management. An integrated set of equations has been developed to enable seamless quantification of emissions and sequestration in these frequently burnt savannas. These show that the increase in carbon stored in fine fuel and CWD amounts to about 3 times the emissions abatement from improvements in fire management achieved in a project area of 28 000 km2. Future work is focussing on improving the understanding of spatial and temporal variation in fire behaviour across Australia's savanna biome, improving quantification of the carbon dynamics of CWD, and improving quantification of the effects of fire on carbon dynamics in savanna soils.
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
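The linear response approximation used to position the fine-physics segments can be written compactly as dG(A->B) ≈ 0.5(<U_B - U_A>_A + <U_B - U_A>_B); the sketch below applies it to placeholder energy samples and chains increments over intermediate potentials. The helper names and synthetic data are assumptions for illustration, not the protocol's actual implementation.

```python
# Linear response approximation (LRA) estimate of a free-energy difference,
# chained over a sequence of intermediate potentials; samples are synthetic.
import numpy as np

def lra_free_energy(dU_on_A, dU_on_B):
    """LRA free-energy difference from U_B - U_A sampled on surfaces A and B."""
    return 0.5 * (np.mean(dU_on_A) + np.mean(dU_on_B))

def multistep_lra(dU_pairs):
    """Sum LRA increments over a chain of intermediate potentials."""
    return sum(lra_free_energy(a, b) for a, b in dU_pairs)

rng = np.random.default_rng(4)
dU_A = rng.normal(2.0, 1.0, 5000)      # placeholder samples on surface A
dU_B = rng.normal(1.0, 1.0, 5000)      # placeholder samples on surface B
print(lra_free_energy(dU_A, dU_B))     # about 1.5
```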
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Berry, J. A.; Collatz, G. J.; Field, C. B.; Hall, F. G.
1992-01-01
The theoretical analyses of Sellers (1985, 1987), which linked canopy spectral reflectance properties to (unstressed) photosynthetic rates and conductances, are critically reviewed and significant shortcomings are identified. These are addressed in this article principally through the incorporation of a more sophisticated and realistic treatment of leaf physiological processes within a new canopy integration scheme. The results indicate that area-averaged spectral vegetation indices, as obtained from coarse resolution satellite sensors, may give good estimates of the area-integrals of photosynthesis and conductance even for spatially heterogeneous (though physiologically uniform) vegetation covers.
The derivation and approximation of coarse-grained dynamics from Langevin dynamics
NASA Astrophysics Data System (ADS)
Ma, Lina; Li, Xiantao; Liu, Chun
2016-11-01
We present a derivation of a coarse-grained description, in the form of a generalized Langevin equation, from the Langevin dynamics model that describes the dynamics of bio-molecules. The focus is placed on the form of the memory kernel function, the colored noise, and the second fluctuation-dissipation theorem that connects them. Also presented is a hierarchy of approximations for the memory and random noise terms, using rational approximations in the Laplace domain. These approximations offer increasing accuracy. More importantly, they eliminate the need to evaluate the integral associated with the memory term at each time step. Direct sampling of the colored noise can also be avoided within this framework. Therefore, the numerical implementation of the generalized Langevin equation is much more efficient.
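One way to see how a rational (here single-exponential) approximation of the memory kernel removes both the memory integral and the colored-noise sampling is the Markovian embedding below, in which an auxiliary variable z reproduces K(t) = (gamma/tau) exp(-t/tau) and its fluctuation-dissipation-consistent noise for a harmonic oscillator. The parameters and the Euler-Maruyama integrator are illustrative assumptions, not the authors' construction.

```python
# Markovian embedding of an exponential memory kernel: z stands in for
# -int K(t-s) v(s) ds + R(t), with noise chosen to satisfy the second
# fluctuation-dissipation theorem. All parameter values are illustrative.
import numpy as np

def gle_exponential_kernel(steps=200_000, dt=5e-3, m=1.0, k_spring=1.0,
                           gamma=1.0, tau=0.5, kT=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, v, z = 1.0, 0.0, 0.0
    noise_amp = np.sqrt(2.0 * kT * gamma * dt) / tau
    xs = np.empty(steps)
    for i in range(steps):
        x += v * dt
        v += (-k_spring * x + z) / m * dt        # z replaces the memory integral
        z += (-z / tau - gamma / tau * v) * dt + noise_amp * rng.standard_normal()
        xs[i] = x
    return xs

traj = gle_exponential_kernel()
print(traj.var())   # should approach kT / k_spring = 1 (equipartition check)
```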
Algorithms for tensor network renormalization
NASA Astrophysics Data System (ADS)
Evenbly, G.
2017-01-01
We discuss in detail algorithms for implementing tensor network renormalization (TNR) for the study of classical statistical and quantum many-body systems. First, we recall established techniques for how the partition function of a 2D classical many-body system or the Euclidean path integral of a 1D quantum system can be represented as a network of tensors, before describing how TNR can be implemented to efficiently contract the network via a sequence of coarse-graining transformations. The efficacy of the TNR approach is then benchmarked for the 2D classical statistical and 1D quantum Ising models; in particular the ability of TNR to maintain a high level of accuracy over sustained coarse-graining transformations, even at a critical point, is demonstrated.
A Multi-Scale, Integrated Approach to Representing Watershed Systems
NASA Astrophysics Data System (ADS)
Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos
2014-05-01
Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to fine space-time scale dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport, tRIBS+VEGGIE-FEaST. The model aims to take advantage of the current wealth of data representing watershed topography, vegetation, soil, and land use, as well as to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the partitioning of precipitation into hydrologic runoff to the dynamics of flow, erosion, and sedimentation, and how the soil's substrate condition can impact the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables in future climate conditions and explore the impact of climate information uncertainty.
Coarse-Graining of Polymer Dynamics via Energy Renormalization
NASA Astrophysics Data System (ADS)
Xia, Wenjie; Song, Jake; Phelan, Frederick; Douglas, Jack; Keten, Sinan
The computational prediction of the properties of polymeric materials to serve the needs of materials design and prediction of their performance is a grand challenge due to the prohibitive computational times of all-atomistic (AA) simulations. Coarse-grained (CG) modeling is an essential strategy for making progress on this problem. While there has been intense activity in this area, effective methods of coarse-graining have been slow to develop. Our approach to this fundamental problem starts from the observation that integrating out degrees of freedom of the AA model leads to a strong modification of the configurational entropy and cohesive interaction. Based on this observation, we propose a temperature-dependent systematic renormalization of the cohesive interaction in the CG modeling to recover the thermodynamic modifications in the system and the dynamics of the AA model. Here, we show that this energy renormalization approach to CG can faithfully estimate the diffusive, segmental and glassy dynamics of the AA model over a large temperature range spanning from the Arrhenius melt to the non-equilibrium glassy states. Our proposed CG strategy offers a promising strategy for developing thermodynamically consistent CG models with temperature transferability.
Konrad, Christopher P.
2015-01-01
Ecological functions and flood-related risks were assessed for floodplains along the 17 major rivers flowing into Puget Sound Basin, Washington. The assessment addresses five ecological functions and five components of flood-related risk at two spatial resolutions: fine and coarse. The fine-resolution assessment compiled spatial attributes of floodplains from existing, publicly available sources and integrated the attributes into 10-meter rasters for each function, hazard, or exposure. The raster values generally represent different types of floodplains with regard to each function, hazard, or exposure rather than the degree of function, hazard, or exposure. The coarse-resolution assessment tabulates attributes from the fine-resolution assessment for larger floodplain units, which are floodplains associated with 0.1- to 21-kilometer-long segments of major rivers. The coarse-resolution assessment also derives indices that can be used to compare function or risk among different floodplain units and to develop normative (based on observed distributions) standards. The products of the assessment are available online as geospatial datasets (Konrad, 2015; http://dx.doi.org/10.5066/F7DR2SJC).
Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proposal
1991-11-01
[Front-matter excerpt from the report: the table of contents lists sections on current technology, the time-integrating correlator, representations of the DNA bases, the DNA analysis strategy, and a strategy for coarse ...; figure and table captions describe the correlation peak formed by the AxB term, the pedestal formed by the A + B terms, and short representations of the DNA bases in which each base is represented by a 7-bit pseudorandom sequence.]
A reductionist perspective on quantum statistical mechanics: Coarse-graining of path integrals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinitskiy, Anton V.; Voth, Gregory A., E-mail: gavoth@uchicago.edu
2015-09-07
Computational modeling of the condensed phase based on classical statistical mechanics has been rapidly developing over the last few decades and has yielded important information on various systems containing up to millions of atoms. However, if a system of interest contains important quantum effects, well-developed classical techniques cannot be used. One way of treating finite temperature quantum systems at equilibrium has been based on Feynman’s imaginary time path integral approach and the ensuing quantum-classical isomorphism. This isomorphism is exact only in the limit of infinitely many classical quasiparticles representing each physical quantum particle. In this work, we present a reductionist perspective on this problem based on the emerging methodology of coarse-graining. This perspective allows for the representations of one quantum particle with only two classical-like quasiparticles and their conjugate momenta. One of these coupled quasiparticles is the centroid particle of the quantum path integral quasiparticle distribution. Only this quasiparticle feels the potential energy function. The other quasiparticle directly provides the observable averages of quantum mechanical operators. The theory offers a simplified perspective on quantum statistical mechanics, revealing its most reductionist connection to classical statistical physics. By doing so, it can facilitate a simpler representation of certain quantum effects in complex molecular environments.
NASA Astrophysics Data System (ADS)
Sieradzan, Adam K.; Makowski, Mariusz; Augustynowicz, Antoni; Liwo, Adam
2017-03-01
A general and systematic method for the derivation of the functional expressions for the effective energy terms in coarse-grained force fields of polymer chains is proposed. The method is based on expanding the potential of mean force of the system studied in the cluster-cumulant series and expanding the all-atom energy in a Taylor series in the squares of interatomic distances about the squares of the distances between coarse-grained centers, to obtain approximate analytical expressions for the cluster cumulants. The primary degrees of freedom to average over are the angles for collective rotation of the atoms contained in the coarse-grained interaction sites about the respective virtual-bond axes. The approach has been applied to the revision of the virtual-bond-angle, virtual-bond-torsional, and backbone-local-and-electrostatic correlation potentials for the UNited RESidue (UNRES) model of polypeptide chains, demonstrating the strong dependence of the torsional and correlation potentials on virtual-bond angles, not considered in the current UNRES. The theoretical considerations are illustrated with the potentials calculated from the ab initio potential-energy surface of terminally blocked alanine by numerical integration and with the statistical potentials derived from known protein structures. The revised torsional potentials correctly indicate that virtual-bond angles close to 90° result in the preference for the turn and helical structures, while large virtual-bond angles result in the preference for polyproline II and extended backbone geometry. The revised correlation potentials correctly reproduce the preference for the formation of β-sheet structures for large values of virtual-bond angles and for the formation of α-helical structures for virtual-bond angles close to 90°.
Evaluation of actuators for the SDOF and MDOF active microgravity isolation systems
NASA Technical Reports Server (NTRS)
1993-01-01
The University of Virginia examined the design of actuators for both single-degree-of-freedom (SDOF) and multiple-degree-of-freedom (MDOF) active microgravity isolation systems. For SDOF systems, two actuators were considered: a special large gap magnetic actuator and a large stroke Lorentz actuator. The magnetic actuator was judged to present greater design difficulty than the Lorentz actuator with little compelling technical advantage, and was dropped from consideration. A Lorentz actuator was designed and built for the SDOF test rig using magnetic circuit and finite element analysis. The design and some experimental results are discussed. The University also examined the design of actuators for MDOF isolation systems. This includes the design of an integrated 1 cm gap, 6-DOF noncontacting magnetic suspension system and of a 'coarse' follower, which permits the practical extension of magnetic suspension to large strokes. The proposed 'coarse' actuator was a closed kinematic chain manipulator known as a Stewart Platform. The integration of the two isolation systems, the isolation tasks assigned to each, and possible control architectures were also explored. The results of this research are examined.
Joseph St. Peter; John Hogland; Nathaniel Anderson; Jason Drake; Paul Medley
2018-01-01
Land cover classification provides valuable information for prioritizing management and conservation operations across large landscapes. Current regional scale land cover geospatial products within the United States have a spatial resolution that is too coarse to provide the necessary information for operations at the local and project scales. This paper describes a...
Microclimate predicts within-season distribution dynamics of montane forest birds
Sarah J.K. Frey; Adam S. Hadley; Matthew G. Betts; Mark Robertson
2016-01-01
Aim: Climate changes are anticipated to have pervasive negative effects on biodiversity and are expected to necessitate widespread range shifts or contractions. Such projections are based upon the assumptions that (1) species respond primarily to broad-scale climatic regimes, or (2) that variation in climate at fine spatial scales is less relevant at coarse spatial...
Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking.
Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Darwish, Walid; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong
2018-05-01
Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints, which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of a correspondence is determined in two ways: from the depth image, or by using the initial poses to triangulate. The model iteratively improves the camera poses and decreases drift error during long-distance RGB-D tracking. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features.
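The hybrid-correspondence idea above can be illustrated with a minimal pose-refinement sketch: 3D-3D residuals (where depth is valid) and 2D reprojection residuals (where it is not) are stacked into one least-squares problem. This is only an illustrative sketch with synthetic data and assumed intrinsics, not the authors' implementation; the function names and weights are hypothetical.

```python
# Minimal sketch: refine a camera pose by jointly minimizing 3D-3D point
# residuals and 2D reprojection residuals, in the spirit of "hybrid
# correspondences". Not the authors' implementation; names are illustrative.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])        # assumed pinhole intrinsics

def project(pts_cam):
    """Project camera-frame 3D points to pixel coordinates."""
    uvw = (K @ pts_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(params, X3d, Y3d, X2d, uv, w3d=1.0, w2d=1.0):
    """Stack 3D-3D and 2D-3D (reprojection) residuals for pose (rotvec, t)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    r3 = w3d * ((X3d @ R.T + t) - Y3d).ravel()         # 3D point-to-point
    r2 = w2d * (project(X2d @ R.T + t) - uv).ravel()   # reprojection error
    return np.concatenate([r3, r2])

# Synthetic example: a known pose, noisy correspondences, then refinement.
rng = np.random.default_rng(0)
R_true = Rotation.from_rotvec([0.05, -0.02, 0.1])
t_true = np.array([0.1, -0.05, 0.2])
X3d = rng.uniform(-1, 1, (50, 3)) + [0, 0, 3]          # points with valid depth
X2d = rng.uniform(-1, 1, (50, 3)) + [0, 0, 3]          # points with 2D observations only
Y3d = X3d @ R_true.as_matrix().T + t_true + 0.01 * rng.standard_normal((50, 3))
uv = project(X2d @ R_true.as_matrix().T + t_true) + 0.5 * rng.standard_normal((50, 2))

fit = least_squares(residuals, np.zeros(6), args=(X3d, Y3d, X2d, uv))
print("estimated translation:", fit.x[3:])
```

In practice the 2D-only points would first be triangulated from the initial poses, as the abstract describes, before entering the joint refinement.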
2013-04-01
configurations in the top 25% and 30% of all unbiased simulations were also projected onto the energy globe [Fig. S3(B)]. These results show that each...Coarse-grained simulations of protein-protein association: An energy landscape perspective. Biophys J 103(4):837–845, 2012. 27. Yang, S., Onuchic, J. N...S1. The projected snapshots were taken only from free and released portions of PPR simulation cycles with an energy cutoff of E_LBD−DBD < 0 kcal/mol and
Climate Change Impact Assessment of Hydro-Climate in Southern Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Ercan, A.; Ishida, K.; Kavvas, M. L.; Chen, Z. R.; Jang, S.; Amin, M. Z. M.; Shaaban, A. J.
2017-12-01
Impacts of climate change on the hydroclimate of the coastal region in the south of Peninsular Malaysia in the 21st century were assessed by means of a regional climate model utilizing an ensemble of 15 different future climate realizations. Coarse-resolution Global Climate Model future projections covering four emission scenarios based on Coupled Model Intercomparison Project phase 3 (CMIP3) datasets were dynamically downscaled to 6 km resolution over the study area. The analyses were made in terms of rainfall, air temperature, evapotranspiration, and soil water storage.
The first ISLSCP field experiment (FIFE). [International Satellite Land Surface Climatology Project
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Hall, F. G.; Asrar, G.; Strebel, D. E.; Murphy, R. E.
1988-01-01
The background and planning of the first International Satellite Land Surface Climatology Project (ISLSCP) field experiment (FIFE) are discussed. In FIFE, the NOAA series of satellites and GOES will be used to provide a moderate-temporal resolution coarse-spatial resolution data set, with SPOT and aircraft data providing the high-spatial resolution pointable-instrument capability. The paper describes the experiment design, the measurement strategy, the configuration of the site of the experiment (which will be at and around the Konza prairie near Manhattan, Kansas), and the experiment's operations and execution.
A multi-temporal analysis approach for land cover mapping in support of nuclear incident response
NASA Astrophysics Data System (ADS)
Sah, Shagan; van Aardt, Jan A. N.; McKeown, Donald M.; Messinger, David W.
2012-06-01
Remote sensing can be used to rapidly generate land use maps for assisting emergency response personnel with resource deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps to map the impacted area in the case of a nuclear material release. The proposed methodology involves integration of results from two different approaches to increase classification accuracy. The data used included RapidEye scenes over Nine Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely available, high temporal resolution, MODIS data using a time-series approach. In the case of a nuclear accident, high spatial resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use maps from the two image sources were integrated using a probability-based approach. Classification results were obtained for four land classes - forest, urban, water and vegetation - using Euclidean and Mahalanobis distances as metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80% when compared with GIS data sets from New York State. The classifications were improved by this fused approach, which also offers a few additional advantages, such as correction for cloud cover and independence from time of year. We concluded that this method would generate highly accurate land cover maps using coarse spatial resolution time series satellite imagery and a single-date, high spatial resolution, multi-spectral image.
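For reference, a minimal sketch of the minimum-distance classification described above, with both Euclidean and Mahalanobis metrics; the synthetic four-band samples and class names stand in for the actual RapidEye/MODIS training data.

```python
# Minimal sketch of minimum-distance classification with Euclidean and
# Mahalanobis metrics for the four land classes; synthetic bands and
# training samples stand in for the RapidEye/MODIS features.
import numpy as np

def train_stats(samples):
    """Per-class mean vector and inverse covariance from training pixels."""
    stats = {}
    for name, X in samples.items():
        mu = X.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
        stats[name] = (mu, inv_cov)
    return stats

def classify(pixels, stats, metric="mahalanobis"):
    """Assign each pixel (N x bands) to the class with the smallest distance."""
    names = list(stats)
    dists = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mu, inv_cov = stats[name]
        d = pixels - mu
        if metric == "mahalanobis":
            dists[:, j] = np.einsum("ni,ij,nj->n", d, inv_cov, d)
        else:  # euclidean
            dists[:, j] = (d ** 2).sum(axis=1)
    return np.array(names)[dists.argmin(axis=1)]

rng = np.random.default_rng(1)
samples = {c: rng.normal(loc=m, scale=0.3, size=(200, 4))
           for c, m in zip(["forest", "urban", "water", "vegetation"],
                           [1.0, 2.0, 0.2, 1.5])}
stats = train_stats(samples)
print(classify(rng.normal(1.0, 0.3, size=(5, 4)), stats))
```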
Andersen, Zorana J; Stafoggia, Massimo; Weinmayr, Gudrun; Pedersen, Marie; Galassi, Claudia; Jørgensen, Jeanette T; Oudin, Anna; Forsberg, Bertil; Olsson, David; Oftedal, Bente; Aasvang, Gunn Marit; Aamodt, Geir; Pyko, Andrei; Pershagen, Göran; Korek, Michal; De Faire, Ulf; Pedersen, Nancy L; Östenson, Claes-Göran; Fratiglioni, Laura; Eriksen, Kirsten T; Tjønneland, Anne; Peeters, Petra H; Bueno-de-Mesquita, Bas; Plusquin, Michelle; Key, Timothy J; Jaensch, Andrea; Nagel, Gabriele; Lang, Alois; Wang, Meng; Tsai, Ming-Yi; Fournier, Agnes; Boutron-Ruault, Marie-Christine; Baglietto, Laura; Grioni, Sara; Marcon, Alessandro; Krogh, Vittorio; Ricceri, Fulvio; Sacerdote, Carlotta; Migliore, Enrica; Tamayo-Uria, Ibon; Amiano, Pilar; Dorronsoro, Miren; Vermeulen, Roel; Sokhi, Ranjeet; Keuken, Menno; de Hoogh, Kees; Beelen, Rob; Vineis, Paolo; Cesaroni, Giulia; Brunekreef, Bert; Hoek, Gerard; Raaschou-Nielsen, Ole
2017-10-13
Epidemiological evidence on the association between ambient air pollution and breast cancer risk is inconsistent. We examined the association between long-term exposure to ambient air pollution and incidence of postmenopausal breast cancer in European women. In 15 cohorts from nine European countries, individual estimates of air pollution levels at the residence were estimated by standardized land-use regression models developed within the European Study of Cohorts for Air Pollution Effects (ESCAPE) and Transport related Air Pollution and Health impacts – Integrated Methodologies for Assessing Particulate Matter (TRANSPHORM) projects: particulate matter (PM) ≤2.5 μm, ≤10 μm, and 2.5–10 μm in diameter (PM2.5, PM10, and PMcoarse, respectively); PM2.5 absorbance; nitrogen oxides (NO2 and NOx); traffic intensity; and elemental composition of PM. We estimated cohort-specific associations between breast cancer and air pollutants using Cox regression models, adjusting for major lifestyle risk factors, and pooled cohort-specific estimates using random-effects meta-analyses. Of 74,750 postmenopausal women included in the study, 3,612 developed breast cancer during 991,353 person-years of follow-up. We found positive but statistically non-significant associations between breast cancer and PM2.5 {hazard ratio (HR) = 1.08 [95% confidence interval (CI): 0.77, 1.51] per 5 μg/m³}, PM10 [1.07 (95% CI: 0.89, 1.30) per 10 μg/m³], PMcoarse [1.20 (95% CI: 0.96, 1.49) per 5 μg/m³], and NO2 [1.02 (95% CI: 0.98, 1.07) per 10 μg/m³], and a statistically significant association with NOx [1.04 (95% CI: 1.00, 1.08) per 20 μg/m³, p = 0.04]. We found suggestive evidence of an association between ambient air pollution and incidence of postmenopausal breast cancer in European women. https://doi.org/10.1289/EHP1742.
Beyond scenario planning: projecting the future using models at Wind Cave National Park (USA)
NASA Astrophysics Data System (ADS)
King, D. A.; Bachelet, D. M.; Symstad, A. J.
2011-12-01
Scenario planning has been used by the National Park Service as a tool for natural resource management planning in the face of climate change. Sets of plausible but divergent future scenarios are constructed from available information and expert opinion and serve as a starting point to derive climate-smart management strategies. However, qualitative hypotheses about how systems would react to a particular set of conditions assumed from coarse-scale climate projections may lack the scientific rigor expected from a federal agency. In an effort to better assess the range of likely futures at Wind Cave National Park, a project was conceived to 1) generate high-resolution historic and future climate time series to identify local weather patterns that may or may not persist, 2) simulate the hydrological cycle in this geologically varied landscape and its response to future climate, 3) project vegetation dynamics and ensuing changes in the biogeochemical cycles given grazing and fire disturbances under new climate conditions, and 4) synthesize and compare results with those from the scenario planning exercise. In this framework, we tested a dynamic global vegetation model against local information on vegetation cover, disturbance history and stream flow to better understand the potential resilience of these ecosystems to climate change. We discuss the tradeoffs between a coarse-scale application of the model, which shows regional trends but has limited ability to project the fine-scale mosaic of vegetation at Wind Cave, and a finer-scale approach that can account for local slope effects on water balance and better assess the vulnerability of landscape facets, but requires more intensive data acquisition. We elaborate on the potential for sharing information between models to mitigate the often-limited treatment of biological feedbacks in the physical representations of soil and atmospheric processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclaurin, Galen; Sengupta, Manajit; Xie, Yu
A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimations of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimations of surface albedo from a reanalysis climatology produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3-5.0 μm) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolutions and the temporal coverage of the MODIS sensor will allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus they require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the Northern Hemisphere for the temporal extent of the NSRDB (1998-2015). We provide a review of validation studies conducted on these two products and describe the methodology developed by NREL to remap the data products to the NSRDB grid and integrate them into a seamless daily data set.
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively inexpensive auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique - multi-level Gaussian process regression - on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
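The projective time integration step mentioned above can be sketched on a toy problem: short bursts of a fine (particle) simulator supply an estimate of the coarse time derivative, which is then used for a large projective step, with a lifting step recreating microscopic states. This is a generic illustration of the technique, not the paper's CFD setup; the toy model and all parameters are made up.

```python
# Sketch of projective time integration: run a short burst of the fine
# (particle) simulator, estimate the coarse time derivative from the burst,
# then take a large projective Euler step. Toy model only: an ensemble of
# noisy particles relaxing toward zero; the coarse variable is their mean.
import numpy as np

rng = np.random.default_rng(2)

def fine_burst(particles, n_steps, dt=0.01, rate=1.0, noise=0.05):
    """Integrate the microscopic model for n_steps; record the coarse mean."""
    means = []
    for _ in range(n_steps):
        particles += dt * (-rate * particles) + noise * np.sqrt(dt) * rng.standard_normal(particles.shape)
        means.append(particles.mean())
    return particles, np.array(means)

def lift(coarse_mean, n_particles=500, spread=0.05):
    """Recreate a microscopic state consistent with the coarse variable."""
    return coarse_mean + spread * rng.standard_normal(n_particles)

coarse = 1.0
dt, burst, big_step = 0.01, 20, 0.5          # fine step, burst length, projective step
for _ in range(10):
    particles = lift(coarse)                                 # lifting
    particles, means = fine_burst(particles, burst, dt)      # fine bursts + restriction
    slope = np.polyfit(dt * np.arange(burst), means, 1)[0]   # estimate d(coarse)/dt
    coarse = means[-1] + big_step * slope                    # projective (coarse) step
    print(f"coarse variable: {coarse:.4f}")
```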
Ultra compact multitip scanning tunneling microscope with a diameter of 50 mm.
Cherepanov, Vasily; Zubkov, Evgeny; Junker, Hubertus; Korte, Stefan; Blab, Marcus; Coenen, Peter; Voigtländer, Bert
2012-03-01
We present a multitip scanning tunneling microscope (STM) where four independent STM units are integrated on a diameter of 50 mm. The coarse positioning of the tips is done under the control of an optical microscope or scanning electron microscopy in vacuum. The heart of this STM is a new type of piezoelectric coarse approach called KoalaDrive. The compactness of the KoalaDrive allows building a four-tip STM as small as a single-tip STM with a drift of less than 0.2 nm/min at room temperature and lowest resonance frequencies of 2.5 kHz (xy) and 5.5 kHz (z). We present as examples of the performance of the multitip STM four point measurements of silicide nanowires and graphene.
High speed true random number generator with a new structure of coarse-tuning PDL in FPGA
NASA Astrophysics Data System (ADS)
Fang, Hongzhen; Wang, Pengjun; Cheng, Xu; Zhou, Keji
2018-03-01
A metastability-based TRNG (true random number generator) is presented in this paper, and implemented in FPGA. The metastable state of a D flip-flop is tunable through a two-stage PDL (programmable delay line). With the proposed coarse-tuning PDL structure, the TRNG core does not require extra placement and routing to ensure its entropy. Furthermore, the core needs fewer stages of coarse-tuning PDL at higher operating frequency, and thus saves more resources in FPGA. The designed TRNG achieves 25 Mbps @ 100 MHz throughput after proper post-processing, which is several times higher than other previous TRNGs based on FPGA. Moreover, the robustness of the system is enhanced with the adoption of a feedback system. The quality of the designed TRNG is verified by NIST (National Institute of Standards and Technology) and also accepted by class P1 of the AIS-20/31 test suite. Project supported by the S&T Plan of Zhejiang Provincial Science and Technology Department (No. 2016C31078), the National Natural Science Foundation of China (Nos. 61574041, 61474068, 61234002), and the K.C. Wong Magna Fund in Ningbo University, China.
2006-11-10
features based on shape are easy to come by. The Great Pyramids at Giza are instantly identified from space, even at the very coarse spatial... Pyramids at Giza, Egypt, are recognized by their triangular faces in this 1 m resolution Ikonos image, as are nearby rectangular tombs (credit: Space
Population exposure to ambient particulate matter (PM) has received considerable attention due to the association between ambient particulate concentrations and mortality. Current toxicological and epidemiological studies and controlled human and animal exposures suggest that a...
Assessing Australian Rainfall Projections in Two Model Resolutions
NASA Astrophysics Data System (ADS)
Taschetto, A.; Haarsma, R. D.; Sen Gupta, A.
2016-02-01
Australian climate is projected to change with increases in greenhouse gases. The IPCC reports an increase in extreme daily rainfall across the country. At the same time, mean rainfall over southeast Australia is projected to reduce during austral winter, but to increase during austral summer, mainly associated with changes in the surrounding oceans. Climate models agree better on the future reduction of average rainfall over the southern regions of Australia compared to the increase in extreme rainfall events. One of the reasons for this disagreement may be related to climate model limitations in simulating the observed mechanisms associated with the mid-latitude weather systems, in particular due to coarse model resolutions. In this study we investigate how changes in sea surface temperature (SST) affect Australian mean and extreme rainfall under global warming, using a suite of numerical experiments at two model resolutions: about 126km (T159) and 25km (T799). The numerical experiments are performed with the earth system model EC-EARTH. Two 6-member ensembles are produced for the present day conditions and a future scenario. The present day ensemble is forced with the observed daily SST from the NOAA National Climatic Data Center from 2002 to 2006. The future scenario simulation is integrated from 2094 to 2098 using the present day SST field added onto the future SST change created from a 17-member ensemble based on the RCP4.5 scenario. Preliminary results show an increase in extreme rainfall events over Tasmania associated with enhanced convection driven by the Tasman Sea warming. We will further discuss how the projected changes in SST will impact the southern mid-latitude weather systems that ultimately affect Australian rainfall.
NMME Monthly / Seasonal Forecasts for NASA SERVIR Applications Science
NASA Astrophysics Data System (ADS)
Robertson, F. R.; Roberts, J. B.
2014-12-01
This work details use of the North American Multi-Model Ensemble (NMME) experimental forecasts as drivers for Decision Support Systems (DSSs) in the NASA / USAID initiative, SERVIR (a Spanish acronym meaning "to serve"). SERVIR integrates satellite observations, ground-based data and forecast models to monitor and forecast environmental changes and to improve response to natural disasters. Through the use of DSSs whose "front ends" are physically based models, the SERVIR activity provides a natural testbed to determine the extent to which NMME monthly to seasonal projections enable scientists, educators, project managers and policy implementers in developing countries to better use probabilistic outlooks of seasonal hydrologic anomalies in assessing agricultural / food security impacts, water availability, and risk to societal infrastructure. The multi-model NMME framework provides a "best practices" approach to probabilistic forecasting. The NMME forecasts are generated at a resolution coarser than that required to support DSS models; downscaling in both space and time is necessary. The methodology adopted here applies model output statistics (MOS): NMME ensemble monthly projections of sea-surface temperature (SST) and precipitation from 30 years of hindcasts are related to observations of precipitation and temperature for the target regions. Since raw model forecasts are well known to have structural biases, a cross-validated multivariate regression methodology (CCA) is used to link the model-projected states as predictors to the predictands of the target region. The target regions include a number of basins in East and South Africa as well as the Ganges / Brahmaputra / Meghna basin complex. The MOS approach addresses spatial downscaling. Temporal disaggregation of monthly seasonal forecasts is achieved through use of a tercile bootstrapping approach. We interpret the results of these studies, the levels of skill by several metrics, and key uncertainties.
NMME Monthly / Seasonal Forecasts for NASA SERVIR Applications Science
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Roberts, Jason B.
2014-01-01
This work details use of the North American Multi-Model Ensemble (NMME) experimental forecasts as drivers for Decision Support Systems (DSSs) in the NASA / USAID initiative, SERVIR (a Spanish acronym meaning "to serve"). SERVIR integrates satellite observations, ground-based data and forecast models to monitor and forecast environmental changes and to improve response to natural disasters. Through the use of DSSs whose "front ends" are physically based models, the SERVIR activity provides a natural testbed to determine the extent to which NMME monthly to seasonal projections enable scientists, educators, project managers and policy implementers in developing countries to better use probabilistic outlooks of seasonal hydrologic anomalies in assessing agricultural / food security impacts, water availability, and risk to societal infrastructure. The multi-model NMME framework provides a "best practices" approach to probabilistic forecasting. The NMME forecasts are generated at a resolution coarser than that required to support DSS models; downscaling in both space and time is necessary. The methodology adopted here applies model output statistics (MOS): NMME ensemble monthly projections of sea-surface temperature (SST) and precipitation from 30 years of hindcasts are related to observations of precipitation and temperature for the target regions. Since raw model forecasts are well known to have structural biases, a cross-validated multivariate regression methodology (CCA) is used to link the model-projected states as predictors to the predictands of the target region. The target regions include a number of basins in East and South Africa as well as the Ganges / Brahmaputra / Meghna basin complex. The MOS approach addresses spatial downscaling. Temporal disaggregation of monthly seasonal forecasts is achieved through use of a tercile bootstrapping approach. We interpret the results of these studies, the levels of skill by several metrics, and key uncertainties.
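The MOS step described above can be sketched as a cross-validated CCA regression between coarse hindcast predictors and target-region observations, with an EOF (PCA) truncation of the predictor fields beforehand. The arrays below are synthetic stand-ins for the NMME hindcasts and station observations, and scikit-learn's CCA is used purely as a convenient stand-in for the actual implementation.

```python
# Sketch of the MOS-CCA step: truncate the coarse hindcast predictor fields
# with PCA (EOFs), fit a CCA regression to observed target values, and
# evaluate skill by leave-one-out cross-validation. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n_years, n_grid, n_targets = 30, 200, 12    # hindcast years, predictor cells, target points

signal = rng.standard_normal((n_years, 3))  # shared low-dimensional signal
X = signal @ rng.standard_normal((3, n_grid)) + 0.5 * rng.standard_normal((n_years, n_grid))
Y = signal @ rng.standard_normal((3, n_targets)) + 0.5 * rng.standard_normal((n_years, n_targets))

errors = []
for i in range(n_years):                      # leave-one-out cross-validation
    train = np.delete(np.arange(n_years), i)
    pca = PCA(n_components=5).fit(X[train])   # EOF truncation of predictor fields
    cca = CCA(n_components=3).fit(pca.transform(X[train]), Y[train])
    pred = cca.predict(pca.transform(X[i:i + 1]))
    errors.append(np.mean((pred - Y[i:i + 1]) ** 2))

print("leave-one-out mean squared error:", round(float(np.mean(errors)), 3))
```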
Automatic bone segmentation in knee MR images using a coarse-to-fine strategy
NASA Astrophysics Data System (ADS)
Park, Sang Hyun; Lee, Soochahn; Yun, Il Dong; Lee, Sang Uk
2012-02-01
Segmentation of bone and cartilage from a three-dimensional knee magnetic resonance (MR) image is a crucial element in monitoring and understanding the development and progress of osteoarthritis. Until now, various segmentation methods have been proposed to separate the bone from other tissues, but it remains a challenging problem due to the different modalities of MR images, low contrast between bone and tissues, and shape irregularity. In this paper, we present a new fully-automatic segmentation method of bone compartments using relevant bone atlases from a training set. To find the relevant bone atlases and obtain the segmentation, a coarse-to-fine strategy is proposed. In the coarse step, the best atlas among the training set and an initial segmentation are simultaneously detected using branch-and-bound tree search. Since the best atlas in the coarse step is not accurately aligned, all atlases from the training set are aligned to the initial segmentation, and the best aligned atlas is selected in the middle step. Finally, in the fine step, segmentation is conducted by adaptively integrating the shape of the best aligned atlas and an appearance prior based on characteristics of local regions. For the experiment, femur and tibia bones of forty test MR images are segmented by the proposed method using sixty training MR images. Experimental results show that the performance of the segmentation and registration improves from the coarse to the fine step, and the proposed method obtains performance comparable with that of state-of-the-art methods.
Matthew B. Russell; Grant M. Domke; Christopher W. Woodall; Anthony W. D' Amato
2015-01-01
Background: Refined estimation of carbon (C) stocks within forest ecosystems is a critical component of efforts to reduce greenhouse gas emissions and mitigate the effects of projected climate change through forest C management. Specifically, belowground C stocks are currently estimated in the United States' national greenhouse gas inventory (US NGHGI) using...
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widlund, Olof B.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large scale and complicated simulations. Special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
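The role of the coarse component can be illustrated on a 1D Poisson model problem with an additive two-level preconditioner (a local Jacobi part plus a coarse-grid correction) supplied to conjugate gradients. This is a generic sketch of the idea, not the project's elasticity or Maxwell solvers; grid sizes and the interpolation operator are illustrative.

```python
# Sketch of a two-level additive preconditioner for CG on a 1D Poisson model
# problem: a local (Jacobi) part plus a coarse-grid correction, illustrating
# why domain decomposition preconditioners include a coarse solve.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spsolve

n = 255                                          # fine-grid interior points
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n) / (n + 1) ** 2

nc = 15                                          # coarse-grid interior points
xf = np.arange(1, n + 1) / (n + 1)
xc = np.arange(1, nc + 1) / (nc + 1)
P = np.zeros((n, nc))                            # piecewise-linear prolongation
for j in range(nc):
    left = xc[j - 1] if j > 0 else 0.0
    right = xc[j + 1] if j < nc - 1 else 1.0
    P[:, j] = np.clip(np.minimum((xf - left) / (xc[j] - left),
                                 (right - xf) / (right - xc[j])), 0.0, None)
Ac = sp.csc_matrix(P.T @ (A @ P))                # Galerkin coarse operator

def two_level(r):
    """Additive preconditioner: Jacobi part + coarse-grid correction."""
    fine = r / A.diagonal()
    coarse = P @ spsolve(Ac, P.T @ r)
    return fine + coarse

M = LinearOperator((n, n), matvec=two_level)
its = []
x, info = cg(A, b, M=M, callback=lambda xk: its.append(0))
print("converged:", info == 0, "CG iterations:", len(its))
```

Without the coarse term the iteration count grows with the number of grid points (or subdomains); with it, the count stays essentially flat, which is the scalability property the abstract refers to.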
Edwards, Cathrina H; Grundy, Myriam ML; Grassby, Terri; Vasilopoulou, Dafni; Frost, Gary S; Butterworth, Peter J; Berry, Sarah EE; Sanderson, Jeremy; Ellis, Peter R
2015-01-01
Background: Cereal crops, particularly wheat, are a major dietary source of starch, and the bioaccessibility of starch has implications for postprandial glycemia. The structure and properties of plant foods have been identified as critical factors in influencing nutrient bioaccessibility; however, the physical and biochemical disassembly of cereal food during digestion has not been widely studied. Objectives: The aims of this study were to compare the effects of 2 porridge meals prepared from wheat endosperm with different degrees of starch bioaccessibility on postprandial metabolism (e.g., glycemia) and to gain insight into the structural and biochemical breakdown of the test meals during gastroileal transit. Design: A randomized crossover trial in 9 healthy ileostomy participants was designed to compare the effects of 55 g starch, provided as coarse (2-mm particles) or smooth (<0.2-mm particles) wheat porridge, on postprandial changes in blood glucose, insulin, C-peptide, lipids, and gut hormones and on the resistant starch (RS) content of ileal effluent. Undigested food in the ileal output was examined microscopically to identify cell walls and encapsulated starch. Results: Blood glucose, insulin, C-peptide, and glucose-dependent insulinotropic polypeptide concentrations were significantly lower (i.e., 33%, 43%, 40%, and 50% lower 120-min incremental AUC, respectively) after consumption of the coarse porridge than after the smooth porridge (P < 0.01). In vitro, starch digestion was slower in the coarse porridge than in the smooth porridge (33% less starch digested at 90 min, P < 0.05, paired t test). In vivo, the structural integrity of coarse particles (∼2 mm) of wheat endosperm was retained during gastroileal transit. Microscopic examination revealed a progressive loss of starch from the periphery toward the particle core. The structure of the test meal had no effect on the amount or pattern of RS output. Conclusion: The structural integrity of wheat endosperm is largely retained during gastroileal digestion and has a primary role in influencing the rate of starch amylolysis and, consequently, postprandial metabolism. This trial was registered at isrctn.org as ISRCTN40517475. PMID:26333512
Evaluation of an improved finite-element thermal stress calculation technique
NASA Technical Reports Server (NTRS)
Camarda, C. J.
1982-01-01
A procedure for generating accurate thermal stresses with coarse finite element grids (Ojalvo's method) is described. The procedure is based on the observation that, for linear thermoelastic problems, the thermal stresses may be envisioned as being composed of two contributions: the first due to the strains in the structure, which depend on the integral of the temperature distribution over the finite element, and the second due to the local variation of the temperature in the element. The first contribution can be accurately predicted with a coarse finite-element mesh. The resulting strain distribution can then be combined via the constitutive relations with detailed temperatures from a separate thermal analysis. The result is accurate thermal stresses from coarse finite element structural models even where the temperature distributions have sharp variations. The range of applicability of the method for various classes of thermostructural problems, such as in-plane or bending-type problems, and the effects of the nature of the temperature distribution and edge constraints are addressed. Ojalvo's method is used in conjunction with the SPAR finite element program. Results are obtained for rods, membranes, a box beam and a stiffened panel.
Registration of Laser Scanning Point Clouds: A Review.
Cheng, Liang; Chen, Song; Liu, Xiaoqiang; Xu, Hao; Wu, Yang; Li, Manchun; Chen, Yanming
2018-05-21
The integration of multi-platform, multi-angle, and multi-temporal LiDAR data has become important for geospatial data applications. This paper presents a comprehensive review of LiDAR data registration in the fields of photogrammetry and remote sensing. At present, a coarse-to-fine registration strategy is commonly used for LiDAR point clouds registration. The coarse registration method is first used to achieve a good initial position, based on which registration is then refined utilizing the fine registration method. According to the coarse-to-fine framework, this paper reviews current registration methods and their methodologies, and identifies important differences between them. The lack of standard data and unified evaluation systems is identified as a factor limiting objective comparison of different methods. The paper also describes the most commonly-used point cloud registration error analysis methods. Finally, avenues for future work on LiDAR data registration in terms of applications, data, and technology are discussed. In particular, there is a need to address registration of multi-angle and multi-scale data from various newly available types of LiDAR hardware, which will play an important role in diverse applications such as forest resource surveys, urban energy use, cultural heritage protection, and unmanned vehicles.
Registration of Laser Scanning Point Clouds: A Review
Cheng, Liang; Chen, Song; Xu, Hao; Wu, Yang; Li, Manchun
2018-01-01
The integration of multi-platform, multi-angle, and multi-temporal LiDAR data has become important for geospatial data applications. This paper presents a comprehensive review of LiDAR data registration in the fields of photogrammetry and remote sensing. At present, a coarse-to-fine registration strategy is commonly used for LiDAR point clouds registration. The coarse registration method is first used to achieve a good initial position, based on which registration is then refined utilizing the fine registration method. According to the coarse-to-fine framework, this paper reviews current registration methods and their methodologies, and identifies important differences between them. The lack of standard data and unified evaluation systems is identified as a factor limiting objective comparison of different methods. The paper also describes the most commonly-used point cloud registration error analysis methods. Finally, avenues for future work on LiDAR data registration in terms of applications, data, and technology are discussed. In particular, there is a need to address registration of multi-angle and multi-scale data from various newly available types of LiDAR hardware, which will play an important role in diverse applications such as forest resource surveys, urban energy use, cultural heritage protection, and unmanned vehicles. PMID:29883397
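A minimal sketch of the coarse registration stage discussed above: an initial rigid transform estimated from a handful of matched keypoints via the SVD (Kabsch) solution, which a fine method such as ICP would subsequently refine. The matched points are synthetic and the function name is illustrative.

```python
# Minimal sketch of a coarse registration step: estimate an initial rigid
# transform from a few matched keypoints with the SVD (Kabsch) solution,
# which a fine method such as ICP would then refine. Synthetic points only.
import numpy as np

def coarse_rigid_align(P, Q):
    """Return R, t such that R @ p + t ~= q for matched points P, Q (N x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(4)
P = rng.uniform(-5, 5, (20, 3))               # matched keypoints in scan A
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal((20, 3))

R, t = coarse_rigid_align(P, Q)
print("RMS alignment error:", np.sqrt(np.mean(((P @ R.T + t) - Q) ** 2)))
```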
NASA Astrophysics Data System (ADS)
Collins, M. J.; Aponte Clarke, G.; Baeder, C.; McCaw, D.; Royte, J.; Saunders, R.; Sheehan, T.
2012-12-01
The Penobscot River Restoration Project aims to improve aquatic connectivity in New England's second largest watershed ( 22,000 km2) by removing the two lowermost, mainstem dams and bypassing a third dam on a principal tributary upstream. Project objectives include: restoring unobstructed access to the entire historic riverine range for five lower river diadromous species including Atlantic and shortnose sturgeon; significantly improving access to upstream habitat for six upper river diadromous species including Atlantic salmon; reconnecting trophic linkages between headwater areas and the Gulf of Maine; restoring fluvial processes to the former impoundments; improving recreational and Penobscot Nation cultural opportunities; and maintaining basin-wide hydropower output. The project is expected to have landscape-scale benefits and the need for a significant investment in long-term monitoring and evaluation to formally quantify ecosystem response has been recognized. A diverse group of federal, state, tribal, NGO, and academic partners has developed a long-term monitoring and evaluation program composed of nine studies that began in 2009. Including American Recovery and Reinvestment Act (ARRA) funding that leveraged partner contributions, we have invested nearly $2M to date in pre- and post-removal investigations that evaluate geomorphology/bed sediment, water quality, wetlands, and fisheries. Given the number of affected diadromous species and the diversity of their life histories, we have initiated six distinct, but related, fisheries investigations to document these expected changes: Atlantic salmon upstream and downstream passage efficiency using passive integrated transponder (PIT) and acoustic telemetry; fish community structure via an index of biotic integrity (IBI); total diadromous fish biomass through hydroacoustics; shortnose sturgeon spawning and habitat use via active and passive acoustic telemetry; and freshwater-marine food web interactions by examining stable nutrient isotopes in fish tissue. Here we summarize the multidisciplinary studies we are undertaking and present some preliminary results from three years of pre-removal study. We highlight our stream channel geometry and bed sediment grain size investigations that reveal impoundments bedded primarily by coarse materials and storing very little sediment, circumstances that are influenced by the reach's geology and late Quaternary history. The pre-removal data from our nine studies help us characterize the impounded and fragmented ecosystem on the eve of dam removal and help us further develop and refine testable hypotheses for ecosystem response to the project.
Mai, B; Deng, X; Xia, X; Che, H; Guo, J; Liu, X; Zhu, J; Ling, C
2018-05-01
The sun-photometer data from 2011 to 2013 at Panyu site (Panyu) and from 2007 to 2013 at Dongguan site (Dg) in the Pearl River Delta region were used to retrieve the aerosol optical depth (AOD), single scattering albedo (SSA), Ångström exponent (AE) and volume size distribution of coarse- and fine-mode particles. The coarse-mode particles presented low AOD (ranging from 0.05±0.03 to 0.08±0.05) but a strong absorption property (SSA ranged from 0.70±0.03 to 0.90±0.02) for the wavelengths between 440 and 1020 nm. However, these coarse particles accounted for <10% of the total particles. The AOD of fine particles (AODf) was over 3 times as large as that of coarse particles (AODc). The fine-particle SSA (SSAf) generally decreased as a function of wavelength, and the relatively lower SSAf value in summer was likely due to the stronger solar radiation and higher temperature. More than 70% of the aerosols at Panyu site were dominated by fine-mode absorbing particles, whereas about 70% of the particles at Dg site were attributed to fine-mode scattering particles. The differences in aerosol optical properties between the two sites are likely associated with local emissions of light-absorbing carbonaceous aerosols and of scattering aerosols (e.g., sulfate and nitrate particles) produced by the gas-phase oxidation of gaseous precursors (e.g., SO2 and NO2). The size distribution exhibited bimodal structures in which the accumulation mode was predominant. The fine-mode volume showed positive dependence on AOD (500 nm), and the growth of the peak value of the fine-mode volume was higher than that of the coarse volume. Both the AOD and SSA increased with increasing relative humidity (RH), while the AE decreased with increasing RH. These correlations imply that the aerosol properties are greatly modified by condensation growth. Copyright © 2017 Elsevier B.V. All rights reserved.
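For reference, the Ångström exponent used above is conventionally derived from spectral AOD via the two-wavelength relation below (generic form; the retrieval in the study may use a multi-wavelength fit).

```latex
% Standard two-wavelength definition of the Angstrom exponent from spectral
% aerosol optical depth (generic form):
\[
  \alpha \;=\; -\,\frac{\ln\!\big(\tau(\lambda_1)/\tau(\lambda_2)\big)}
                       {\ln\!\big(\lambda_1/\lambda_2\big)},
\]
% where \tau(\lambda) is the AOD at wavelength \lambda. Fine-mode-dominated
% aerosol yields larger \alpha, while coarse-mode-dominated aerosol yields
% \alpha close to zero.
```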
ProtoMD: A prototyping toolkit for multiscale molecular dynamics
NASA Astrophysics Data System (ADS)
Somogyi, Endre; Mansour, Andrew Abi; Ortoleva, Peter J.
2016-05-01
ProtoMD is a toolkit that facilitates the development of algorithms for multiscale molecular dynamics (MD) simulations. It is designed for multiscale methods which capture the dynamic transfer of information across multiple spatial scales, such as the atomic to the mesoscopic scale, via coevolving microscopic and coarse-grained (CG) variables. ProtoMD can also be used to calibrate parameters needed in traditional CG-MD methods. The toolkit integrates 'GROMACS wrapper' to initiate MD simulations, and 'MDAnalysis' to analyze and manipulate trajectory files. It facilitates experimentation with a spectrum of coarse-grained variables, prototyping rare events (such as chemical reactions), or simulating nanocharacterization experiments such as terahertz spectroscopy, AFM, nanopore, and time-of-flight mass spectroscopy. ProtoMD is written in python and is freely available under the GNU General Public License from github.com/CTCNano/proto_md.
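As an illustration of the kind of coarse-grained variable such a workflow tracks, the sketch below computes per-residue centers of mass directly with MDAnalysis, one of the packages ProtoMD wraps. The file names are placeholders and this is not ProtoMD's own API.

```python
# Sketch of computing simple coarse-grained variables (per-residue centers of
# mass) with MDAnalysis, the analysis package ProtoMD builds on.
# File names are placeholders; this is not ProtoMD's own API.
import numpy as np
import MDAnalysis as mda

u = mda.Universe("protein.gro", "traj.xtc")       # placeholder topology/trajectory
protein = u.select_atoms("protein")

cg_trajectory = []
for ts in u.trajectory:
    # One CG bead per residue: its center of mass at this frame.
    beads = np.array([res.atoms.center_of_mass() for res in protein.residues])
    cg_trajectory.append(beads)

cg_trajectory = np.array(cg_trajectory)           # shape: (n_frames, n_residues, 3)
print("CG trajectory shape:", cg_trajectory.shape)
```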
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.
2002-01-01
The variable-resolution stretched-grid (SG) GEOS (Goddard Earth Observing System) GCM has been used for limited ensemble integrations with a relatively coarse, 60 to 100 km, regional resolution over the U.S. The experiments have been run for the 12-year period, 1987-1998, that includes the recent ENSO cycles. Initial conditions 1-2 days apart are used for ensemble members. The goal of the experiments is to analyze the long-term SG-GCM ensemble integrations in terms of their potential for reducing the uncertainties of regional climate simulation while producing realistic mesoscales. The ensemble integration results are analyzed for both prognostic and diagnostic fields. Special attention is devoted to analyzing the variability of precipitation over the U.S. The internal variability of the SG-GCM has been assessed. The ensemble means appear to be closer to the verifying analyses than the individual ensemble members. The ensemble means capture realistic mesoscale patterns, especially those induced by orography. Two ENSO cycles have been analyzed in terms of their impact on the U.S. climate, especially on precipitation. The ability of the SG-GCM simulations to produce regional climate anomalies has been confirmed. However, the optimal size of the ensembles, which depends on the fine regional resolution used, is still to be determined. The SG-GCM ensemble simulations are performed as a preliminary stage for the international SGMIP (Stretched-Grid Model Intercomparison Project) that is under way with participation of the major centers and groups employing the SG approach for regional climate modeling.
Improving membrane protein expression by optimizing integration efficiency
2017-01-01
The heterologous overexpression of integral membrane proteins in Escherichia coli often yields insufficient quantities of purifiable protein for applications of interest. The current study leverages a recently demonstrated link between co-translational membrane integration efficiency and protein expression levels to predict protein sequence modifications that improve expression. Membrane integration efficiencies, obtained using a coarse-grained simulation approach, robustly predicted effects on expression of the integral membrane protein TatC for a set of 140 sequence modifications, including loop-swap chimeras and single-residue mutations distributed throughout the protein sequence. Mutations that improve simulated integration efficiency were 4-fold enriched with respect to improved experimentally observed expression levels. Furthermore, the effects of double mutations on both simulated integration efficiency and experimentally observed expression levels were cumulative and largely independent, suggesting that multiple mutations can be introduced to yield higher levels of purifiable protein. This work provides a foundation for a general method for the rational overexpression of integral membrane proteins based on computationally simulated membrane integration efficiencies. PMID:28918393
Development of a landscape integrity model framework to support regional conservation planning.
Walston, Leroy J; Hartmann, Heidi M
2018-01-01
Land managers increasingly rely upon landscape assessments to understand the status of natural resources and identify conservation priorities. Many of these landscape planning efforts rely on geospatial models that characterize the ecological integrity of the landscape. These general models utilize measures of habitat disturbance and human activity to map indices of ecological integrity. We built upon these modeling frameworks by developing a Landscape Integrity Index (LII) model using geospatial datasets of the human footprint, as well as incorporation of other indicators of ecological integrity such as biodiversity and vegetation departure. Our LII model serves as a general indicator of ecological integrity in a regional context of human activity, biodiversity, and change in habitat composition. We also discuss the application of the LII framework in two related coarse-filter landscape conservation approaches to expand the size and connectedness of protected areas as regional mitigation for anticipated land-use changes.
Development of a landscape integrity model framework to support regional conservation planning
Hartmann, Heidi M.
2018-01-01
Land managers increasingly rely upon landscape assessments to understand the status of natural resources and identify conservation priorities. Many of these landscape planning efforts rely on geospatial models that characterize the ecological integrity of the landscape. These general models utilize measures of habitat disturbance and human activity to map indices of ecological integrity. We built upon these modeling frameworks by developing a Landscape Integrity Index (LII) model using geospatial datasets of the human footprint, as well as incorporation of other indicators of ecological integrity such as biodiversity and vegetation departure. Our LII model serves as a general indicator of ecological integrity in a regional context of human activity, biodiversity, and change in habitat composition. We also discuss the application of the LII framework in two related coarse-filter landscape conservation approaches to expand the size and connectedness of protected areas as regional mitigation for anticipated land-use changes. PMID:29614093
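A minimal sketch of how normalized indicator rasters can be combined into a weighted integrity index, with the human-footprint layer inverted so that low disturbance scores high. The arrays and weights are illustrative only and do not reproduce the published LII model.

```python
# Minimal sketch of combining normalized indicator rasters into a weighted
# landscape-integrity index: human footprint is inverted so that low
# disturbance scores high. Arrays and weights are illustrative only.
import numpy as np

def normalize(layer):
    """Rescale a raster layer to the 0-1 range."""
    lo, hi = np.nanmin(layer), np.nanmax(layer)
    return (layer - lo) / (hi - lo)

rng = np.random.default_rng(5)
shape = (100, 100)
human_footprint = rng.random(shape)        # higher = more disturbed
biodiversity = rng.random(shape)           # higher = more intact
veg_departure = rng.random(shape)          # higher = further from reference condition

weights = {"footprint": 0.5, "biodiversity": 0.3, "vegetation": 0.2}
lii = (weights["footprint"] * (1.0 - normalize(human_footprint))
       + weights["biodiversity"] * normalize(biodiversity)
       + weights["vegetation"] * (1.0 - normalize(veg_departure)))

print("LII range:", float(lii.min()), "to", float(lii.max()))
```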
NASA Astrophysics Data System (ADS)
Kassianov, E.; Pekour, M. S.; Flynn, C. J.; Berg, L. K.; Beranek, J.; Zelenyuk, A.; Zhao, C.; Leung, L. R.; Ma, P. L.; Riihimaki, L.; Fast, J. D.; Barnard, J.; Hallar, G. G.; McCubbin, I.; Eloranta, E. W.; McComiskey, A. C.; Rasch, P. J.
2017-12-01
Understanding the effects of dust on the regional and global climate requires detailed information on particle size distributions and their changes with distance from the source. Awareness is now growing about the tendency of the dust coarse mode with moderate (~3.5 µm) volume median diameter (VMD) to be rather insensitive to the complex removal processes associated with long-range transport of dust from the main sources. Our study, with a focus on the transpacific transport of dust, demonstrates that the impact of coarse mode aerosol (VMD ~3 µm) is well defined at the high-elevation mountain-top Storm Peak Laboratory (SPL, about 3.2 km MSL) and the nearby Atmospheric Radiation Measurement (ARM) Climate Research Facility Mobile Facility (AMF) during March 2011. Significant amounts of coarse mode aerosol are also found at the nearest Aerosol Robotic Network (AERONET) site. Outputs from the high-resolution Weather Research and Forecasting (WRF) Model coupled with chemistry (WRF-Chem) show that the major dust event is likely associated with transpacific transport of Asian and African plumes. Satellite data, including the Moderate Resolution Imaging Spectroradiometer (MODIS) and Multiangle Imaging SpectroRadiometer (MISR) aerosol optical depth (AOD) and plume height from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) lidar data, provide observational support for the WRF-Chem simulations. Our study complements previous findings by indicating that the quasi-static nature of the coarse mode appears to be a reasonable approximation for Asian and African dust despite expected frequent orographic precipitation over mountainous regions in the western United States.
Probing finite coarse-grained virtual Feynman histories with sequential weak values
NASA Astrophysics Data System (ADS)
Georgiev, Danko; Cohen, Eliahu
2018-05-01
Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.
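For reference, the sequential weak value of a chain of projectors referred to above is conventionally defined as below (generic notation, not reproduced from the paper), which makes the unit-sum property over a complete set of orthogonal histories immediate.

```latex
% Sequential weak value of a chain of projectors \Pi_{a_1}(t_1),\dots,\Pi_{a_n}(t_n)
% for a system pre-selected in |\psi\rangle and post-selected in |\phi\rangle:
\[
  (\Pi_{a_n}\cdots\Pi_{a_1})_w \;=\;
  \frac{\langle\phi|\,\Pi_{a_n}(t_n)\cdots\Pi_{a_1}(t_1)\,|\psi\rangle}
       {\langle\phi|\psi\rangle}.
\]
% Summing over a complete set of orthogonal histories, with
% \sum_{a_k}\Pi_{a_k}(t_k)=\hat{1} at every time, collapses the numerator to
% \langle\phi|\psi\rangle, so
\[
  \sum_{a_1,\dots,a_n} (\Pi_{a_n}\cdots\Pi_{a_1})_w \;=\; 1 .
\]
```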
Three-dimensional elliptic grid generation for an F-16
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds-averaged Navier-Stokes equations near the surface of the aircraft, and the Euler equations in regions removed from the aircraft. A body-conforming global grid, suitable for the Euler equations, is first generated using 3-D Poisson equations having inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. The grid generation project is described, with particular emphasis on the global coarse grid.
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that the coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
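For context, the scatter model typically enters the reconstruction as an additive term in the forward projection of the OS-EM update, shown below in generic notation (the paper's exact formulation may differ); the coarse-grid and intermittent schemes reduce how often, and at what resolution, this scatter term must be recomputed.

```latex
% OS-EM update for subset S_b, with the Monte Carlo scatter estimate s_i
% entering additively in the forward projection (generic notation):
\[
  \lambda_j^{(k,b+1)} \;=\;
  \frac{\lambda_j^{(k,b)}}{\sum_{i \in S_b} a_{ij}}
  \sum_{i \in S_b} a_{ij}\,
  \frac{y_i}{\sum_{j'} a_{ij'}\,\lambda_{j'}^{(k,b)} + s_i},
\]
% where y_i are the measured projection counts, a_{ij} the system matrix,
% \lambda_j the activity estimate, and s_i the scatter estimate.
```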
NASA Astrophysics Data System (ADS)
Khalid, Faisal Sheikh; Azmi, Nurul Bazilah; Sumandi, Khairul Azwa Syafiq Mohd; Mazenan, Puteri Natasya
2017-10-01
Many construction and development activities today consume large amounts of concrete. The amount of construction waste is also increasing because of the demolition process. Much of this waste can be recycled to produce new products and increase the sustainability of construction projects. As recyclable construction wastes, concrete and ceramic can replace the natural aggregate in concrete because of their hard and strong physical properties. This research used 25%, 35%, and 45% recycled concrete aggregate (RCA) and ceramic waste as coarse aggregate in producing concrete. Several tests, such as concrete cube compression and splitting tensile tests, were also performed to determine and compare the mechanical properties of the recycled concrete with those of the normal concrete that contains 100% natural aggregate. The concrete containing 35% RCA and 35% ceramic waste showed the best properties compared with the normal concrete.
Time and space integrating acousto-optic folded spectrum processing for SETI
NASA Technical Reports Server (NTRS)
Wagner, K.; Psaltis, D.
1986-01-01
Time and space integrating folded spectrum techniques utilizing acousto-optic devices (AOD) as 1-D input transducers are investigated for a potential application as wideband, high resolution, large processing gain spectrum analyzers in the search for extra-terrestrial intelligence (SETI) program. The space integrating Fourier transform performed by a lens channels the coarse spectral components diffracted from an AOD onto an array of time integrating narrowband fine resolution spectrum analyzers. The pulsing action of a laser diode samples the interferometrically detected output, aliasing the fine resolution components to baseband, as required for the subsequent charge coupled devices (CCD) processing. The raster scan mechanism incorporated into the readout of the CCD detector array is used to unfold the 2-D transform, reproducing the desired high resolution Fourier transform of the input signal.
NASA Astrophysics Data System (ADS)
Seiler, C.; Zwiers, F. W.; Hodges, K. I.; Scinocca, J. F.
2018-01-01
Explosive extratropical cyclones (EETCs) are rapidly intensifying low pressure systems that generate severe weather along North America's Atlantic coast. Global climate models (GCMs) tend to simulate too few EETCs, perhaps partly due to their coarse horizontal resolution and poorly resolved moist diabatic processes. This study explores whether dynamical downscaling can reduce EETC frequency biases, and whether this affects future projections of storms along North America's Atlantic coast. A regional climate model (CanRCM4) is forced with the CanESM2 GCM for the periods 1981 to 2000 and 2081 to 2100. EETCs are tracked from relative vorticity using an objective feature tracking algorithm. CanESM2 simulates 38% fewer EETC tracks compared to reanalysis data, which is consistent with a negative Eady growth rate bias (-0.1 day⁻¹). Downscaling CanESM2 with CanRCM4 increases EETC frequency by one third, which reduces the frequency bias to -22%, and increases maximum EETC precipitation by 22%. Anthropogenic greenhouse gas forcing is projected to decrease EETC frequency (-15%, -18%) and Eady growth rate (-0.2 day⁻¹, -0.2 day⁻¹), and increase maximum EETC precipitation (46%, 52%) in CanESM2 and CanRCM4, respectively. The limited effect of dynamical downscaling on EETC frequency projections is consistent with the lack of impact on the maximum Eady growth rate. The coarse spatial resolution of GCMs presents an important limitation for simulating extreme ETCs, but Eady growth rate biases are likely just as relevant. Further bias reductions could be achieved by addressing processes that lead to an underestimation of lower tropospheric meridional temperature gradients.
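For reference, the maximum Eady growth rate quoted above is commonly computed from the formula below (a standard baroclinicity measure; the study's exact formulation may differ).

```latex
% Maximum Eady growth rate, a standard measure of baroclinic instability:
\[
  \sigma_E \;=\; 0.31\,\frac{f}{N}\,\left|\frac{\partial \mathbf{u}}{\partial z}\right|,
\]
% with f the Coriolis parameter, N the Brunt-Vaisala frequency, and
% \partial\mathbf{u}/\partial z the vertical wind shear; the result is
% usually expressed in units of day^{-1}.
```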
NASA Technical Reports Server (NTRS)
Estes, Maurice G., Jr.; Crosson, William; Limaye, Ashutosh; Johnson, Hoyt; Quattrochi, Dale; Lapenta, William; Khan, Maudood
2006-01-01
Planning is an integral element of good management and is necessary to anticipate events, not merely respond to them. Projecting the quantity and spatial distribution of urban growth is essential to effectively plan for the delivery of city services and to evaluate potential environmental impacts. The major drivers of growth in large urban areas are increasing population, employment opportunities, and quality-of-life attractors such as a favorable climate and recreation opportunities. The spatial distribution of urban growth is dictated by the amount and location of developable land, topography, energy and water resources, the transportation network, climate change, and the existing land use configuration. The Atlanta region is growing very rapidly both in population and in the consumption of forestland for low-density residential development. Air pollution and water availability are significant ongoing environmental issues. The Prescott Spatial Growth Model (SGM) was used to make growth projections for the metropolitan Atlanta region to 2010, 2020 and 2030, and the results were used for environmental assessment under both business-as-usual and smart-growth scenarios. The Prescott SGM is a tool that uses an ESRI ArcView extension; it can be applied at the parcel level or at coarser spatial scales and can accommodate a wide range of user inputs to develop any number of growth rules, each of which can be weighted depending on growth assumptions. These projections were used in conjunction with meteorological and air quality models to evaluate future environmental impacts. This presentation will focus on the application of the SGM to the 13-County Atlanta Regional Commission planning jurisdiction as a case study. The SGM will be described, including how rule sets are developed and the decision process for allocating future development to available land use categories. Data inputs required to effectively run the model will be discussed. Spatial growth projections for ten-, twenty-, and thirty-year planning horizons will be presented and results discussed, including regional climate and air quality impacts.
Terrestrial Environmental Variables Derived From EOS Platform Sensors
NASA Technical Reports Server (NTRS)
Stadler, Stephen J.; Czajkowski, Kevin P.; Goward, Samuel N.; Xue, Yongkang
2001-01-01
The three main objectives of the overall project were: (1) adaptation of environmental constraint methods to take advantage of EOS sensors, specifically MODIS, ASTER, and Landsat-7, in addition to the PM AVHRR observations; (2) refinement of environmental constraint methods based on fundamental scientific knowledge; and (3) assessment of spatial scaling patterns in environmental constraint measurements to evaluate the potential biases and errors that occur when estimating regional and global-scale NPP patterns with moderate to coarse satellite observations. These goals were modified because, on the one hand, MODIS data did not become available until after the first year of the project and, on the other, because of project staffing issues at the University of Maryland. The OSU portion of the project carried a modest amount of funding and responsibility compared to the University of Maryland and the University of Toledo.
NASA Technical Reports Server (NTRS)
Hsieh, Shang-Hsien
1993-01-01
The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.
Interior Fluid Dynamics of Liquid-Filled Projectiles
1989-12-01
the Sandia code. The previous codes are primarily based on finite-difference approximations with relatively coarse grids and were designed without... exploits Chorin's method of artificial compressibility. The steady solution at 11 × 24 × 21 grid points in the r, θ, z directions is obtained by integrating... differences in the radial and axial directions and pseudospectral differencing in the azimuthal direction. Nonuniform grids are introduced for increased
NASA Astrophysics Data System (ADS)
Steyn-Ross, Moira L.; Steyn-Ross, D. A.
2016-02-01
Mean-field models of the brain approximate spiking dynamics by assuming that each neuron responds to its neighbors via a naive spatial average that neglects local fluctuations and correlations in firing activity. In this paper we address this issue by introducing a rigorous formalism to enable spatial coarse-graining of spiking dynamics, scaling from the microscopic level of a single type 1 (integrator) neuron to a macroscopic assembly of spiking neurons that are interconnected by chemical synapses and nearest-neighbor gap junctions. Spiking behavior at the single-neuron scale ℓ ≈ 10 μm is described by Wilson's two-variable conductance-based equations [H. R. Wilson, J. Theor. Biol. 200, 375 (1999), 10.1006/jtbi.1999.1002], driven by fields of incoming neural activity from neighboring neurons. We map these equations to a coarser spatial resolution of grid length Bℓ, with B ≫ 1 being the blocking ratio linking micro and macro scales. Our method systematically eliminates high-frequency (short-wavelength) spatial modes q⃗ in favor of low-frequency spatial modes Q⃗ using an adiabatic elimination procedure that has been shown to be equivalent to the path-integral coarse graining applied to renormalization group theory of critical phenomena. This bottom-up neural regridding allows us to track the percolation of synaptic and ion-channel noise from the single neuron up to the scale of macroscopic population-average variables. Anticipated applications of neural regridding include extraction of the current-to-firing-rate transfer function, investigation of fluctuation criticality near phase-transition tipping points, determination of spatial scaling laws for avalanche events, and prediction of the spatial extent of self-organized macrocolumnar structures. As a first-order exemplar of the method, we recover nonlinear corrections for a coarse-grained Wilson spiking neuron embedded in a network of identical diffusively coupled neurons whose chemical synapses have been disabled. Intriguingly, we find that reblocking transforms the original type 1 Wilson integrator into a type 2 resonator whose spike-rate transfer function exhibits abrupt spiking onset with near-vertical takeoff and chaotic dynamics just above threshold.
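The blocking ratio B that links micro and macro grids can be pictured with a simple spatial block average; the full adiabatic elimination of short-wavelength modes described above is much more involved, so the following is only an illustrative coarse-graining of a 2-D activity field on an assumed square grid.

```python
import numpy as np

def reblock(field, B):
    """Coarse-grain a 2-D field by averaging non-overlapping B x B blocks."""
    ny, nx = field.shape
    assert ny % B == 0 and nx % B == 0, "grid must divide evenly into blocks"
    return field.reshape(ny // B, B, nx // B, B).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Hypothetical per-neuron firing activity on a 64 x 64 microgrid
activity = rng.normal(loc=20.0, scale=5.0, size=(64, 64))

coarse = reblock(activity, B=8)          # macrogrid with blocking ratio B = 8
print(activity.shape, "->", coarse.shape)
print(activity.mean(), coarse.mean())    # block averaging preserves the mean
print(activity.var(), coarse.var())      # but suppresses short-wavelength fluctuations
```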
High-performance reactionless scan mechanism
NASA Technical Reports Server (NTRS)
Williams, Ellen I.; Summers, Richard T.; Ostaszewski, Miroslaw A.
1995-01-01
A high-performance reactionless scan mirror mechanism was developed for space applications to provide thermal images of the Earth. The design incorporates a unique mechanical means of providing reactionless operation that also minimizes weight, mechanical resonance operation to minimize power, combined use of a single optical encoder to sense coarse and fine angular position, and a new kinematic mount of the mirror. A flex pivot hardware failure and current project status are discussed.
Model Uncertainty and Test of a Segmented Mirror Telescope
2014-03-01
Optical Telescope project EOM: equation of motion FCA: fine control actuator FCD: Face-Centered Cubic Design FEA: finite element analysis FEM: finite...housed in a dark tent to isolate the telescope from stray light, air currents, or dust and other debris. However, the closed volume is prone to...is composed of six hexagonal segments that each have six coarse control actuators (CCA) for segment phasing control, three fine control actuators
Multi-Scale Simulation of High Energy Density Ionic Liquids
2007-06-19
and simulation of ionic liquids (ILs). A polarizable model was developed to simulate ILs more accurately at the atomistic level. A multiscale coarse... propellant, 1-hydroxyethyl-4-amino-1,2,4-triazolium nitrate (HEATN), were studied with the all-atom polarizable model. The mechanism suggested for HEATN... with this AFOSR-supported project, a polarizable force field for ionic liquids such as 1-ethyl-3-methylimidazolium nitrate (EMIM+/NO3-) was
Evaluation of near surface ozone and particulate matter in air ...
In this study, techniques typically used for future air quality projections are applied to a historical 11-year period to assess the performance of the modeling system when the driving meteorological conditions are obtained using dynamical downscaling of coarse-scale fields without correcting toward higher-resolution observations. The Weather Research and Forecasting model and the Community Multiscale Air Quality model are used to simulate regional climate and air quality over the contiguous United States for 2000–2010. The air quality simulations for that historical period are then compared to observations from four national networks. Comparisons are drawn between defined performance metrics and other published modeling results for predicted ozone, fine particulate matter, and speciated fine particulate matter. The results indicate that the historical air quality simulations driven by dynamically downscaled meteorology are typically within defined modeling performance benchmarks and are consistent with results from other published modeling studies using finer-resolution meteorology. This indicates that the regional climate and air quality modeling framework utilized here does not introduce substantial bias, which provides confidence in the method’s use for future air quality projections. This paper shows that if emissions inputs and coarse-scale meteorological inputs are reasonably accurate, then air quality can be simulated with acceptable accuracy even wi
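Performance metrics of the kind used in such model-observation comparisons commonly include the normalized mean bias and normalized mean error. A minimal sketch with hypothetical paired daily values follows; the specific benchmark thresholds cited in the study are not reproduced here.

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB (%) = 100 * sum(model - obs) / sum(obs)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.sum(model - obs) / np.sum(obs)

def normalized_mean_error(model, obs):
    """NME (%) = 100 * sum(|model - obs|) / sum(obs)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.sum(np.abs(model - obs)) / np.sum(obs)

# Hypothetical daily ozone (ppb): downscaled-meteorology simulation vs. network observations
obs   = [52.0, 61.0, 48.0, 70.0, 66.0, 55.0]
model = [49.0, 65.0, 50.0, 63.0, 71.0, 58.0]
print(f"NMB = {normalized_mean_bias(model, obs):+.1f}%")
print(f"NME = {normalized_mean_error(model, obs):.1f}%")
```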
NASA Astrophysics Data System (ADS)
Vaudour, Emmanuelle; Gomez, Cécile; Fouad, Youssef; Gilliot, Jean-Marc; Lagacherie, Philippe
2017-04-01
This study aimed at exploring the potential of SENTINEL-2 (S2A) multispectral satellite images for predicting several topsoil properties in two contrasted environments: a temperate region marked by intensive annual crop cultivation and soils derived from either loess or colluvium and/or marine limestone or chalk for one part (Versailles Plain, 221 km2), and a Mediterranean region marked by vineyard cultivation and soils derived from either lacustrine limestone, calcareous sandstones, colluvium, or alluvial deposits (La Peyne catchment, 48 km2) for the other part. Two S2A images (acquired in mid-March 2016 over each site) were atmospherically corrected. NDVI was then computed and thresholded (0.35) in order to extract bare soils. Prediction models of soil properties based on partial least squares regression (PLSR) were built from S2A spectra of 72 and 143 sampling locations in the Versailles Plain and La Peyne catchment, respectively. Ten soil properties were investigated in both regions: pH, cation exchange capacity (CEC), five texture fractions (clay, coarse silt, fine silt, coarse sand and fine sand), iron, calcium carbonate and soil organic carbon (SOC) in the tilled horizon. Predictive abilities were evaluated using the cross-validated R_cv^2 and the ratio of performance to deviation (RPD). Intermediate to near-intermediate prediction performances (R_cv^2 between 0.28 and 0.70, RPD between 1.19 and 1.85) were obtained for 6 topsoil properties: clay, iron, SOC, CEC, pH, and coarse silt. In the Versailles Plain, 5 of these properties could be predicted (by decreasing performance: CEC, SOC, pH, clay, coarse silt), while 4 properties were predictable for the La Peyne catchment (iron, clay, CEC, coarse silt). The coarse fragment content appeared to affect the prediction error for iron over La Peyne, while, together with the calcium carbonate content, it influenced the prediction error for SOC over the Versailles Plain. A spatial structure of the estimated soil properties for bare-soil pixels was highlighted, which promises further improvements in spatial prediction models for these properties. This work was carried out in the framework of both the TOSCA-CES "Cartographie Numérique des sols" and the PLEIADES-CO projects of the French Space Agency (CNES).
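PLSR of a soil property on bare-soil reflectance spectra, with cross-validated R^2 and RPD as quality measures, can be sketched as follows. Synthetic spectra and clay contents stand in for the S2A data, and the number of latent variables is an assumption, not the study's setting.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_bands = 100, 10            # e.g., a handful of multispectral bands
spectra = rng.uniform(0.05, 0.45, size=(n_samples, n_bands))
# Synthetic clay content (g/kg) loosely tied to two of the bands, plus noise
clay = 120.0 + 400.0 * spectra[:, 7] - 250.0 * spectra[:, 3] + rng.normal(0, 15, n_samples)

pls = PLSRegression(n_components=4)
pred = cross_val_predict(pls, spectra, clay, cv=10).ravel()

ss_res = np.sum((clay - pred) ** 2)
ss_tot = np.sum((clay - clay.mean()) ** 2)
r2_cv = 1.0 - ss_res / ss_tot                                   # cross-validated R^2
rpd = clay.std(ddof=1) / np.sqrt(np.mean((clay - pred) ** 2))   # ratio of performance to deviation
print(f"R2_cv = {r2_cv:.2f}, RPD = {rpd:.2f}")
```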
Improved integrated sniper location system
NASA Astrophysics Data System (ADS)
Figler, Burton D.; Spera, Timothy J.
1999-01-01
In July of 1995, Lockheed Martin IR Imaging Systems, of Lexington, Massachusetts began the development of an integrated sniper location system for the Defense Advanced Research Projects Agency and for the Department of the Navy's Naval Command Control & Ocean Surveillance Center, RDTE Division in San Diego, California. The I-SLS integrates acoustic and uncooled infrared sensing technologies to provide an affordable and highly effective sniper detection and location capability. This system, its performance and results from field tests at Camp Pendleton, California, in October 1996 were described in a paper presented at the November 1996 SPIE Photonics East Symposium1 on Enabling Technologies for Law Enforcement and Security. The I-SLS combines an acoustic warning system with an uncooled infrared warning system. The acoustic warning system has been developed by SenTech, Inc., of Lexington, Massachusetts. This acoustic warning system provides sniper detection and coarse location information based upon the muzzle blast of the sniper's weapon and/or upon the shock wave produced by the sniper's bullet, if the bullet is supersonic. The uncooled infrared warning system provides sniper detection and fine location information based upon the weapon's muzzle flash. In addition, the uncooled infrared warning system can provide thermal imagery that can be used to accurately locate and identify the sniper. Combining these two technologies improves detection probability, reduces false alarm rate and increases utility. In the two years since the last report of the integrated sniper location system, improvements have been made and a second field demonstration was planned. In this paper, we describe the integrated sniper location system modifications in preparation for the new field demonstration. In addition, fundamental improvements in the uncooled infrared sensor technology continue to be made. These improvements include higher sensitivity (lower minimum resolvable temperature), higher spatial resolution, and smaller size. This paper will describe the implementation and status of these improvements.
An implicit divalent counterion force field for RNA molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henke, Paul S.; Mak, Chi H., E-mail: cmak@usc.edu; Center of Applied Mathematical Sciences, University of Southern California, Los Angeles, California 90089
How to properly account for polyvalent counterions in a molecular dynamics simulation of polyelectrolytes such as nucleic acids remains an open question. Not only do counterions such as Mg{sup 2+} screen electrostatic interactions, they also produce attractive intrachain interactions that stabilize secondary and tertiary structures. Here, we show how a simple force field derived from a recently reported implicit counterion model can be integrated into a molecular dynamics simulation for RNAs to realistically reproduce key structural details of both single-stranded and base-paired RNA constructs. This divalent counterion model is computationally efficient. It works with existing atomistic force fields, or coarse-grained models may be tuned to work with it. We provide optimized parameters for a coarse-grained RNA model that takes advantage of this new counterion force field. Using the new model, we illustrate how the structural flexibility of RNA two-way junctions is modified under different salt conditions.
Monotonic entropy growth for a nonlinear model of random exchanges.
Apenko, S M
2013-02-01
We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
NASA Astrophysics Data System (ADS)
Ziegler, Hannes Moritz
Planners and managers often rely on coarse population distribution data from the census for addressing various social, economic, and environmental problems. In the analysis of physical vulnerabilities to sea-level rise, census units such as blocks or block groups are coarse relative to the required decision-making application. This study explores the benefits offered from integrating image classification and dasymetric mapping at the household level to provide detailed small area population estimates at the scale of residential buildings. In a case study of Boca Raton, FL, a sea-level rise inundation grid based on mapping methods by NOAA is overlaid on the highly detailed population distribution data to identify vulnerable residences and estimate population displacement. The enhanced spatial detail offered through this method has the potential to better guide targeted strategies for future development, mitigation, and adaptation efforts.
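The overlay of an inundation grid on building-level population estimates reduces, in essence, to masking and summing. A minimal sketch with made-up arrays follows; a real workflow would use classified imagery and a NOAA-style inundation surface in a GIS rather than these hypothetical values.

```python
import numpy as np

# Hypothetical dasymetric output: estimated residents per residential building
building_population = np.array([2.4, 3.1, 1.8, 4.0, 2.9, 3.5])
# Building centroids (x, y) in an arbitrary local grid coordinate system
centroids = np.array([[10, 4], [12, 7], [3, 2], [8, 9], [5, 5], [14, 3]])

# Hypothetical sea-level-rise inundation grid indexed as grid[x, y] (1 = inundated)
inundation = np.zeros((16, 12), dtype=int)
inundation[0:6, 0:6] = 1            # low-lying corner of the study area

def displaced_population(pop, xy, grid):
    """Sum population of buildings whose centroid falls in an inundated cell."""
    ix, iy = xy[:, 0].astype(int), xy[:, 1].astype(int)
    flooded = grid[ix, iy] == 1
    return pop[flooded].sum(), np.flatnonzero(flooded)

total, which = displaced_population(building_population, centroids, inundation)
print(f"Estimated displaced residents: {total:.1f} (buildings {which.tolist()})")
```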
A combined emitter threat assessment method based on ICW-RCM
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing
2017-08-01
Because traditional emitter threat assessment methods struggle to reflect the degree of target threat intuitively and suffer from poor real-time performance and high complexity, an emitter combined threat assessment algorithm based on ICW-RCM (an improved combination weighting method, ICW, applied to the radar chart method, RCM) is proposed. Coarse sorting is integrated with fine sorting in the combined assessment: emitters are first ranked roughly by threat level according to radar operation mode, lowering the task priority of low-threat emitters; emitters sharing the same radar operation mode are then ranked using ICW-RCM, and the final threat assessment is obtained by combining the coarse and fine sorting. Simulation analyses show the correctness and effectiveness of this algorithm. Compared with the classical emitter threat assessment method based on CW-RCM, the algorithm is visually intuitive and runs quickly with lower complexity.
Monotonic entropy growth for a nonlinear model of random exchanges
NASA Astrophysics Data System (ADS)
Apenko, S. M.
2013-02-01
We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific “coarse graining” of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
Simulating the flow of entangled polymers.
Masubuchi, Yuichi
2014-01-01
To optimize automation for polymer processing, attempts have been made to simulate the flow of entangled polymers. In industry, fluid dynamics simulations with phenomenological constitutive equations have been practically established. However, to account for molecular characteristics, a method to obtain the constitutive relationship from the molecular structure is required. Molecular dynamics simulations with atomic description are not practical for this purpose; accordingly, coarse-grained models with reduced degrees of freedom have been developed. Although the modeling of entanglement is still a challenge, mesoscopic models with a priori settings to reproduce entangled polymer dynamics, such as tube models, have achieved remarkable success. To use the mesoscopic models as staging posts between atomistic and fluid dynamics simulations, studies have been undertaken to establish links from the coarse-grained model to the atomistic and macroscopic simulations. Consequently, integrated simulations from materials chemistry to predict the macroscopic flow in polymer processing are forthcoming.
Consistent integration of experimental and ab initio data into molecular and coarse-grained models
NASA Astrophysics Data System (ADS)
Vlcek, Lukas
As computer simulations are increasingly used to complement or replace experiments, highly accurate descriptions of physical systems at different time and length scales are required to achieve realistic predictions. The questions of how to objectively measure model quality in relation to reference experimental or ab initio data, and how to transition seamlessly between different levels of resolution are therefore of prime interest. To address these issues, we use the concept of statistical distance to define a measure of similarity between statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the systems' measurable properties. Through systematic coarse-graining, we arrive at appropriate expressions for optimization loss functions consistently incorporating microscopic ab initio data as well as macroscopic experimental data. The design of coarse-grained and multiscale models is then based on factoring the model system partition function into terms describing the system at different resolution levels. The optimization algorithm takes advantage of thermodynamic perturbation expressions for fast exploration of the model parameter space, enabling us to scan millions of parameter combinations per hour on a single CPU. The robustness and generality of the new model optimization framework and its efficient implementation are illustrated on selected examples including aqueous solutions, magnetic systems, and metal alloys.
Fischer, Nicholas G.; Wong, Jeffrey; Cerutis, D. Roselyn
2017-01-01
Mucosal seal formation around dental abutments is critical to the successful integration of dental implants into the human oral cavity. No information exists for how clinically relevant polishing procedures for computer-aided design and computer-aided manufactured (CAD/CAM) zirconia abutments affects cellular responses important to mucosal seal formation. CAD/CAM zirconia was divided into four groups for clinically relevant polishing utilizing commercial polishing heads: control, coarse, coarse plus medium, and coarse plus medium plus fine. Surfaces were analyzed with scanning electron microscopy (SEM), atomic force microscopy (AFM), and optical profilometry (OP). Subsequently, human gingival fibroblasts (HGFs) were seeded onto the zirconia surfaces. Proliferation was measured via a quantitative SEM technique and focal adhesion kinase (FAK) phosphorylation status was measured by an enzyme-linked immunosorbent assay (ELISA). Results showed an increase in proliferation on all polished surfaces as compared to the control. Phosphorylation of FAK at tyrosine 397 (Y397) was up-modulated on the control surfaces. The associated cell adaptation is discussed. In all cases, FAK phosphorylation was greater at 24 h than 48 h. These results suggest that clinicians should be mindful of the effects of abutment polishing methodology, as this may have an impact on early mucosal seal formation. PMID:29186907
Modeling of Turbulent Natural Convection in Enclosed Tall Cavities
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.
2017-12-01
It was shown in our previous work (J. Appl. Mech. Tech. Phys. 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10^14. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1:4, 1:10, and 1:28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1:28.6), DNS results (aspect ratio of 1:4), and an empirical formula (aspect ratio of 1:10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for an aspect ratio of up to 1:10. For higher aspect ratios, the number of grid cells required for achieving prescribed accuracy grows significantly.
Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian
2013-01-08
It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of the coarse-grained protein model - the CABS model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field, which is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of entire folding (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for the long-time multiscale molecular modeling of protein systems with atomistic resolution.
Weak Galilean invariance as a selection principle for coarse-grained diffusive models.
Cairoli, Andrea; Klages, Rainer; Baule, Adrian
2018-05-29
How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac-Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call "weak Galilean invariance." Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data.
Peculiar Traits of Coarse AP (Briefing Charts)
2014-12-01
Briefing-chart excerpts: coarse AP (Bircumshaw, Newman); active centers are sources of AP decomposition gases; AP low-temperature decomposition (LTD); most unstable AP particles...; delay before coarse AP ejection; coarse AP particle flame retardancy; combustion bomb trials; AP phase change may enable coarse particle breakage; fractured coarse AP ejection agrees...
NASA Astrophysics Data System (ADS)
Snover, A. K.; Littell, J. S.; Mantua, N. J.; Salathe, E. P.; Hamlet, A. F.; McGuire Elsner, M.; Tohver, I.; Lee, S.
2010-12-01
Assessing and planning for the impacts of climate change require regionally-specific information. Information is required not only about projected changes in climate but also the resultant changes in natural and human systems at the temporal and spatial scales of management and decision making. Therefore, climate impacts assessment typically results in a series of analyses, in which relatively coarse-resolution global climate model projections of changes in regional climate are downscaled to provide appropriate input to local impacts models. This talk will describe recent examples in which coarse-resolution (~150 to 300km) GCM output was “translated” into information requested by decision makers at relatively small (watershed) and large (multi-state) scales using regional climate modeling, statistical downscaling, hydrologic modeling, and sector-specific impacts modeling. Projected changes in local air temperature, precipitation, streamflow, and stream temperature were developed to support Seattle City Light’s assessment of climate change impacts on hydroelectric operations, future electricity load, and resident fish populations. A state-wide assessment of climate impacts on eight sectors (agriculture, coasts, energy, forests, human health, hydrology and water resources, salmon, and urban stormwater infrastructure) was developed for Washington State to aid adaptation planning. Hydro-climate change scenarios for approximately 300 streamflow locations in the Columbia River basin and selected coastal drainages west of the Cascades were developed in partnership with major water management agencies in the Pacific Northwest to allow planners to consider how hydrologic changes may affect management objectives. Treatment of uncertainty in these assessments included: using “bracketing” scenarios to describe a range of impacts, using ensemble averages to characterize the central estimate of future conditions (given an emissions scenario), and explicitly assessing the impacts of multiple GCM ensemble members. The implications of various approaches to dealing with uncertainty, such as these, must be carefully communicated to decision makers in order for projected climate impacts to be viewed as credible and used appropriately.
A parallel time integrator for noisy nonlinear oscillatory systems
NASA Astrophysics Data System (ADS)
Subber, Waad; Sarkar, Abhijit
2018-06-01
In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of numerical integration techniques for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
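The coarse/fine two-level structure of the parareal iteration can be sketched for a deterministic Duffing oscillator, dropping the Wiener forcing and the MPI parallelism for brevity; all parameter values below are illustrative and the propagators are plain explicit-Euler stand-ins, not the paper's solvers.

```python
import numpy as np

def duffing_rhs(t, y, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
    # x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t), written as a first-order system
    x, v = y
    return np.array([v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)])

def propagate(y0, t0, t1, nsteps):
    """Explicit-Euler propagator; nsteps controls whether it acts as 'coarse' or 'fine'."""
    y, t = np.array(y0, float), t0
    h = (t1 - t0) / nsteps
    for _ in range(nsteps):
        y = y + h * duffing_rhs(t, y)
        t += h
    return y

def parareal(y0, t_grid, n_iter=5, coarse_steps=5, fine_steps=200):
    N = len(t_grid) - 1
    U = [np.array(y0, float)]
    for n in range(N):                       # initial serial coarse sweep
        U.append(propagate(U[-1], t_grid[n], t_grid[n + 1], coarse_steps))
    for _ in range(n_iter):
        # Fine propagation of each slice is independent -> the parallelizable part
        F = [propagate(U[n], t_grid[n], t_grid[n + 1], fine_steps) for n in range(N)]
        G_old = [propagate(U[n], t_grid[n], t_grid[n + 1], coarse_steps) for n in range(N)]
        U_new = [np.array(y0, float)]
        for n in range(N):                   # serial coarse correction sweep
            G_new = propagate(U_new[n], t_grid[n], t_grid[n + 1], coarse_steps)
            U_new.append(G_new + F[n] - G_old[n])
        U = U_new
    return np.array(U)

t_grid = np.linspace(0.0, 20.0, 41)          # 40 coarse time slices
traj = parareal([1.0, 0.0], t_grid)
print(traj[-1])                              # state at t = 20
```

In the stochastic setting of the abstract, each slice would additionally carry a fixed realization of the Wiener increments so that coarse and fine propagators see the same noise sample path.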
Assessing soil ecosystem services using empirical indicators
NASA Astrophysics Data System (ADS)
Bodí, Merche B.; Struyf, Eric; Staes, Jan; Meire, Patrick
2014-05-01
Studying soil through the ecosystem services (ES) approach is a way to embrace the complexity and multiple functions of soil systems and their interactions with the environment and with humans. The ES approach is well suited to developing sustainable, integrated land management and to raising awareness of the value of conserving soil. However, this approach has so far generally been used only for soil provisioning services and the potential for carbon storage, not for other services such as erosion control or water buffering. In addition, the studies carried out have focused on coarse spatial scales, without identifying spatial or temporal variability. One reason for this bias is the difficulty of obtaining a broad and reliable dataset of indicators from empirical sources. This constraint is addressed by the SOGLO project (the Soil System under Global Change), an interuniversity attraction pole project (2012-2017) involving several Belgian universities. The project brings the opportunity to obtain a unique soil dataset for an improved and integrated analysis of the feedbacks between the soil system and fluxes of sediment, carbon (C), nutrients and water in response to anthropogenic forcings at different spatial and temporal scales, at experimental sites in both Brazil and Belgium. Within this broad project, the objective of the present work is to elucidate how different land uses in Belgium (forest, grassland, and cropland under conventional or reduced tillage, both with crop rotation) affect the delivery of, and trade-offs among, soil ecosystem services. We did this by measuring and comparing a range of indicators of soil ecosystem services in different land uses over a period of 5 years. Specifically, we investigated the quantity of SOC in the soil and DOC in the soil solution and at the discharge point (SOC storage/water buffering services); Si, N, and P in the soil, dissolved in the soil solution, and at the discharge point (regulation of the P, N, and Si cycles/water buffering services); infiltration capacity, water retention curves, and soil erosion (soil stability/water buffering services); and vegetation cover (biomass production service). We then examined the relationships and trade-offs between services spatially and seasonally. The results will be presented at the conference session, but our hypothesis is that the performance of soil services is interrelated, even seasonally, and that the degradation of one service enhances the degradation of the others.
Integration and Testing of the Lunar Reconnaissance Orbiter Attitude Control System
NASA Technical Reports Server (NTRS)
Simpson, Jim; Badgley, Jason; McCaughey, Ken; Brown, Kristen; Calhoun, Philip; Davis, Edward; Garrick, Joseph; Gill, Nathaniel; Hsu, Oscar; Jones, Noble;
2010-01-01
Throughout the Lunar Reconnaissance Orbiter (LRO) Integration and Testing (I&T) phase of the project, the Attitude Control System (ACS) team completed numerous tests on each hardware component in ever more flight like environments. The ACS utilizes a select group of attitude sensors and actuators. This paper chronicles the evolutionary steps taken to verify each component was constantly ready for flight as well as providing invaluable trending experience with the actual hardware. The paper includes a discussion of each ACS hardware component, lessons learned of the various stages of I&T, a discussion of the challenges that are unique to the LRO project, as well as a discussion of work for future missions to consider as part of their I&T plan. LRO ACS sensors were carefully installed, tested, and maintained over the 18 month I&T and prelaunch timeline. Care was taken with the optics of the Adcole Coarse Sun Sensors (CSS) to ensure their critical role in the Safe Hold mode was fulfilled. The use of new CSS stimulators provided the means of testing each CSS sensor independently, in ambient and vacuum conditions as well as over a wide range of thermal temperatures. Extreme bright light sources were also used to test the CSS in ambient conditions. The integration of the two SELEX Galileo Star Trackers was carefully planned and executed. Optical ground support equipment was designed and used often to check the performance of the star trackers throughout I&T in ambient and thermal/vacuum conditions. A late discovery of potential contamination of the star tracker light shades is discussed in this paper. This paper reviews how each time the spacecraft was at a new location and orientation, the Honeywell Miniature Inertial Measurement Unit (MIMU) was checked for data output validity. This gyro compassing test was performed at several key testing points in the timeline as well as several times while LRO was on the launch pad. Sensor alignment tests were completed several times to ensure that hardware remained on a rigid platform.
NASA Technical Reports Server (NTRS)
Chaikovsky, A.; Dubovik, O.; Holben, Brent N.; Bril, A.; Goloub, P.; Tanre, D.; Pappalardo, G.; Wandinger, U.; Chaikovskaya, L.; Denisov, S.;
2015-01-01
This paper presents a detailed description of the LIRIC (LIdar-Radiometer Inversion Code) algorithm for simultaneous processing of coincident lidar and radiometric (sun photometric) observations for the retrieval of aerosol concentration vertical profiles. As the lidar and radiometric input data, we use measurements from European Aerosol Research Lidar Network (EARLINET) lidars and collocated sun photometers of the Aerosol Robotic Network (AERONET). The LIRIC data processing provides sequential inversion of the combined lidar and radiometric data: column-integrated aerosol parameters are first estimated from the radiometric measurements, and height-dependent concentrations of fine and coarse aerosols are then retrieved from the lidar signals using the integrated column characteristics of the aerosol layer as a priori constraints. The use of polarized lidar observations allows us to discriminate between spherical and non-spherical particles of the coarse aerosol mode. The LIRIC software package was implemented and tested at a number of EARLINET stations. An intercomparison of the LIRIC-based aerosol retrievals was performed for observations by seven EARLINET lidars in Leipzig, Germany on 25 May 2009. We found close agreement between the aerosol parameters derived from different lidars, which supports the high robustness of the LIRIC algorithm. The sensitivity of the retrieval results to a possible reduction of the available observation data is also discussed.
Atmospheric Rivers in VR-CESM: Historical Comparison and Future Projections
NASA Astrophysics Data System (ADS)
McClenny, E. E.; Ullrich, P. A.
2016-12-01
Atmospheric rivers (ARs) are responsible for most of the horizontal vapor transport from the tropics, and bring upwards of half the annual precipitation to midlatitude west coasts. The difference between a drought year and a wet year can come down to 1-2 ARs. Such few events transform an otherwise arid region into one which supports remarkable biodiversity, productive agriculture, and booming human populations. It follows that such a sensitive hydroclimate feature would demand priority in evaluating end-of-century climate runs, and indeed, the AR subfield has grown significantly over the last decade. However, results tend to vary wildly from study to study, raising questions about how to best approach ARs in models. The disparity may result from any number of issues, including the ability for a model to properly resolve a precipitating AR, to the formulation and application of an AR detection algorithm. ARs pose a unique problem in global climate models (GCMs) computationally and physically, because the GCM horizontal grid must be fine enough to resolve coastal mountain range topography and force orographic precipitation. Thus far, most end-of-century projections on ARs have been performed on models whose grids are too coarse to resolve mountain ranges, causing authors to draw conclusions on AR intensity from water vapor content or transport alone. The use of localized grid refinement in the Variable Resolution version of NCAR's Community Earth System Model (VR-CESM) has succeeded in resolving AR landfall. This study applies an integrated water vapor AR detection algorithm to historical and future projections from VR-CESM, with historical ARs validated against NASA's Modern Era Retrospective-Analysis for Research and Applications. Results on end-of-century precipitating AR frequency, intensity, and landfall location will be discussed.
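Integrated water vapor thresholding of the kind used in such AR detection algorithms reduces to a vertical pressure integral of specific humidity (and, for transport-based criteria, of the humidity-weighted wind). A minimal sketch on an assumed single column follows; the 20 mm threshold is a common illustrative choice, not necessarily the study's exact criterion.

```python
import numpy as np

G = 9.81  # m s^-2

def integrated_water_vapor(q, p):
    """IWV (kg m^-2, i.e., mm of precipitable water) = (1/g) * integral of q dp."""
    # p in Pa, ordered from top of column to surface so that dp > 0
    return np.trapz(q, p) / G

def integrated_vapor_transport(q, u, v, p):
    """IVT magnitude (kg m^-1 s^-1) from specific humidity and horizontal winds."""
    qu = np.trapz(q * u, p) / G
    qv = np.trapz(q * v, p) / G
    return np.hypot(qu, qv)

# Hypothetical sounding: pressure (Pa), specific humidity (kg/kg), winds (m/s)
p = np.array([300e2, 500e2, 700e2, 850e2, 1000e2])
q = np.array([0.0005, 0.002, 0.006, 0.010, 0.014])
u = np.array([30.0, 25.0, 20.0, 18.0, 12.0])
v = np.array([10.0, 12.0, 15.0, 14.0, 8.0])

iwv = integrated_water_vapor(q, p)
ivt = integrated_vapor_transport(q, u, v, p)
print(f"IWV = {iwv:.1f} mm, IVT = {ivt:.0f} kg m^-1 s^-1")
print("AR candidate:", iwv >= 20.0)   # illustrative IWV threshold
```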
NASA Technical Reports Server (NTRS)
Aiken, Alexander
2001-01-01
The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analysis software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.
NASA Technical Reports Server (NTRS)
Chen, Zhangxin; Ewing, Richard E.
1996-01-01
Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Wang, Yaqi; Gleicher, Frederick
This paper presents a flexible nonlinear diffusion acceleration (NDA) method that discretizes both the S_N transport equation and the diffusion equation using the discontinuous finite element method (DFEM). The method is flexible in that the diffusion equation can be discretized on a coarser mesh with the only restriction that it is nested within the transport mesh and the FEM shape function orders of the two equations can be different. The consistency of the transport and diffusion solutions at convergence is defined by using a projection operator mapping the transport into the diffusion FEM space. The diffusion weak form is based on the modified incomplete interior penalty (MIP) diffusion DFEM discretization that is extended by volumetric drift, interior face, and boundary closure terms. In contrast to commonly used coarse mesh finite difference (CMFD) methods, the presented NDA method uses a full FEM discretized diffusion equation for acceleration. Suitable projection and prolongation operators arise naturally from the FEM framework. Via Fourier analysis and numerical experiments for a one-group, fixed source problem the following properties of the NDA method are established for structured quadrilateral meshes: (1) the presented method is unconditionally stable and effective in the presence of mild material heterogeneities if the same mesh and identical shape functions either of the bilinear or biquadratic type are used, (2) the NDA method remains unconditionally stable in the presence of strong heterogeneities, (3) the NDA method with bilinear elements extends the range of effectiveness and stability by a factor of two when compared to CMFD if a coarser diffusion mesh is selected. In addition, the method is tested for solving the C5G7 multigroup, eigenvalue problem using coarse and fine mesh acceleration. Finally, while NDA does not offer an advantage over CMFD for fine mesh acceleration, it reduces the iteration count required for convergence by almost a factor of two in the case of coarse mesh acceleration.
Schunert, Sebastian; Wang, Yaqi; Gleicher, Frederick; ...
2017-02-21
This paper presents a flexible nonlinear diffusion acceleration (NDA) method that discretizes both the S_N transport equation and the diffusion equation using the discontinuous finite element method (DFEM). The method is flexible in that the diffusion equation can be discretized on a coarser mesh with the only restriction that it is nested within the transport mesh and the FEM shape function orders of the two equations can be different. The consistency of the transport and diffusion solutions at convergence is defined by using a projection operator mapping the transport into the diffusion FEM space. The diffusion weak form is based on the modified incomplete interior penalty (MIP) diffusion DFEM discretization that is extended by volumetric drift, interior face, and boundary closure terms. In contrast to commonly used coarse mesh finite difference (CMFD) methods, the presented NDA method uses a full FEM discretized diffusion equation for acceleration. Suitable projection and prolongation operators arise naturally from the FEM framework. Via Fourier analysis and numerical experiments for a one-group, fixed source problem the following properties of the NDA method are established for structured quadrilateral meshes: (1) the presented method is unconditionally stable and effective in the presence of mild material heterogeneities if the same mesh and identical shape functions either of the bilinear or biquadratic type are used, (2) the NDA method remains unconditionally stable in the presence of strong heterogeneities, (3) the NDA method with bilinear elements extends the range of effectiveness and stability by a factor of two when compared to CMFD if a coarser diffusion mesh is selected. In addition, the method is tested for solving the C5G7 multigroup, eigenvalue problem using coarse and fine mesh acceleration. Finally, while NDA does not offer an advantage over CMFD for fine mesh acceleration, it reduces the iteration count required for convergence by almost a factor of two in the case of coarse mesh acceleration.
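The nested-mesh requirement means that, for piecewise-constant fields, restriction of a transport-mesh quantity to the coarser diffusion mesh amounts to a cell-length-weighted average. The toy 1-D projection below illustrates that idea only; it is not the MIP/DFEM machinery of the paper.

```python
import numpy as np

def project_fine_to_coarse(fine_values, fine_edges, coarse_edges):
    """L2 projection of a piecewise-constant field onto a coarser nested 1-D mesh.

    For piecewise constants this reduces to the overlap-weighted average of the
    fine cells contained in each coarse cell (nestedness is assumed).
    """
    coarse = np.zeros(len(coarse_edges) - 1)
    for k in range(len(coarse)):
        a, b = coarse_edges[k], coarse_edges[k + 1]
        total, length = 0.0, 0.0
        for j in range(len(fine_values)):
            lo, hi = fine_edges[j], fine_edges[j + 1]
            overlap = max(0.0, min(b, hi) - max(a, lo))
            total += fine_values[j] * overlap
            length += overlap
        coarse[k] = total / length
    return coarse

fine_edges = np.linspace(0.0, 1.0, 9)           # 8 fine cells
coarse_edges = np.linspace(0.0, 1.0, 3)         # 2 coarse cells, nested in the fine mesh
phi_fine = np.array([1.0, 1.2, 0.9, 1.1, 2.0, 2.2, 1.9, 2.1])   # e.g., a scalar flux
print(project_fine_to_coarse(phi_fine, fine_edges, coarse_edges))
```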
NASA Astrophysics Data System (ADS)
Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-12-01
The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory
2005-01-01
The presentation describes data management of NASA remote sensing data for the Northern Eurasia Earth Science Partnership Initiative (NEESPI). Many types of ground and integrative (e.g., satellite, GIS) data will be needed, and many models must be applied, adapted, or developed to properly understand the functioning of Northern Eurasia's cold and diverse regional system. Mechanisms for obtaining the requisite data sets and models and sharing them among the participating scientists are essential. The proposed project targets integration of remote sensing data from AVHRR, MODIS, and other NASA instruments on board US satellites (with potential expansion to data from non-US satellites), customized data products from climatology data sets (e.g., ISCCP, ISLSCP), and model data (e.g., NCEP/NCAR) into a single, well-architected data management system. It will utilize two existing components developed by the Goddard Earth Sciences Data & Information Services Center (GES DISC) at the NASA Goddard Space Flight Center: (1) an online archiving and distribution system that allows collection, processing, and ingest of data from various sources into the online archive, and (2) a user-friendly, intelligent, web-based online visualization and analysis system, also known as Giovanni. The former includes various kinds of data preparation for seamless interoperability between measurements by different instruments. The latter provides convenient access to various geophysical parameters measured in the Northern Eurasia region without any need to learn complicated remote sensing data formats, or to retrieve and process large volumes of NASA data. Initial implementation of this data management system will concentrate on atmospheric data and surface data aggregated to coarse resolution to support collaborative environment and climate change studies and modeling, while at later stages, data from NASA and non-NASA satellites at higher resolution will be integrated into the system.
Path statistics, memory, and coarse-graining of continuous-time random walks on networks
Kion-Crosby, Willow; Morozov, Alexandre V.
2015-01-01
Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
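For the Markovian special case of exponential waiting times, the mean first-passage times discussed above satisfy a linear system that can be solved directly. The sketch below uses a small hypothetical rate matrix and is not the PathMAN algorithm itself, which also handles non-exponential waiting times, higher moments, and path-action statistics.

```python
import numpy as np

def mean_first_passage_times(rates, target):
    """MFPT to `target` for a continuous-time Markov chain with jump-rate matrix `rates`.

    rates[i, j] is the jump rate i -> j (i != j). Solves, for non-target states i,
        sum_j Q[i, j] * tau[j] = -1,  with tau[target] = 0,
    where Q is the generator (off-diagonal rates, diagonal = -row sum).
    """
    n = rates.shape[0]
    Q = rates - np.diag(rates.sum(axis=1))          # generator matrix
    keep = [i for i in range(n) if i != target]
    A = Q[np.ix_(keep, keep)]
    tau = np.zeros(n)
    tau[keep] = np.linalg.solve(A, -np.ones(len(keep)))
    return tau

# Hypothetical 4-state network (e.g., coarse-grained conformations); rates in 1/ns
rates = np.array([
    [0.0, 2.0, 0.5, 0.0],
    [1.0, 0.0, 1.5, 0.2],
    [0.3, 1.0, 0.0, 2.5],
    [0.0, 0.1, 1.0, 0.0],
])
print(mean_first_passage_times(rates, target=3))    # MFPT (ns) from each state to state 3
```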
Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.
Revina, Yulia; Petro, Lucy S; Muckli, Lars
2017-09-22
Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Salma, Imre; Maenhaut, Willy; Zemplén-Papp, Éva; Záray, Gyula
As part of an air pollution project in Budapest, aerosol samples were collected by stacked filter units and cascade impactors at an urban background site, two downtown sites, and within a road tunnel in field campaigns conducted in 1996, 1998 and 1999. Some criteria pollutants were also measured at one of the downtown sites. The aerosol samples were analysed by one or more of the following methods: instrumental neutron activation analysis, particle-induced X-ray emission analysis, a light reflection technique, gravimetry, thermal profiling carbon analysis and capillary electrophoresis. The quantities measured or derived include atmospheric concentrations of elements (from Na to U), of particulate matter, of black and elemental carbon, and total carbonaceous fraction, of some ionic species (e.g., nitrate and sulphate) in the fine (<2 μm equivalent aerodynamic diameter, EAD) or in both coarse (10-2 μm EAD) and fine size fractions, atmospheric concentrations of NO, NO2, SO2, CO and total suspended particulate matter, and meteorological parameters. The analytical results were used for characterisation of the concentration levels, elemental composition, time trends, enrichment of and relationships among the aerosol species in coarse and fine size fractions, for studying their fine-to-coarse concentration ratios, spatial and temporal variability, for determining detailed elemental mass size distributions, and for examining the extent of chemical mass closure.
Challenges in Characterizing and Controlling Complex Cellular Systems
NASA Astrophysics Data System (ADS)
Wikswo, John
2011-03-01
Multicellular dynamic biological processes such as developmental differentiation, wound repair, disease, aging, and even homeostasis can be represented by trajectories through a phase space whose extent reflects the genetic, post-translational, and metabolic complexity of the process - easily extending to tens of thousands of dimensions. Intra- and inter-cellular sensing and regulatory systems and their nested, redundant, and non-linear feed-forward and feed-back controls create high-dimensioned attractors in this phase space. Metabolism provides free energy to drive non-equilibrium processes and dynamically reconfigure attractors. Studies of single molecules and cells provide only minimalist projections onto a small number of axes. It may be difficult to infer larger-scale emergent behavior from linearized experiments that perform only small amplitude perturbations on a limited number of the dimensions. Complete characterization may succeed for bounded component problems, such as an individual cell cycle or signaling cascade, but larger systems problems will require a coarse-grained approach. Hence a new experimental and analytical framework is needed. Possibly one could utilize high-amplitude, multi-variable driving of the system to infer coarse-grained, effective models, which in turn can be tested by their ability to control systems behavior. Navigation at will between attractors in a high-dimensioned dynamical system will provide not only detailed knowledge of the shape of attractor basins, but also measures of underlying stochastic events such as noise in gene expression or receptor binding and how both affect system stability and robustness. Needed for this are wide-bandwidth methods to sense and actuate large numbers of intracellular and extracellular variables and automatically and rapidly infer dynamic control models. The success of this approach may be determined by how broadly the sensors and actuators can span the full dimensionality of the phase space. Supported by the Defense Threat Reduction Agency HDTRA-09-1-0013, NIH National Institute on Drug Abuse RC2DA028981, the National Academies Keck Futures Initiative, and the Vanderbilt Institute for Integrative Biosystems Research and Education.
Xue, Zhigang; Hao, Jiming; Chai, Fahe; Duan, Ning; Chen, Yizhen; Li, Jindan; Chen, Fu; Liu, Simei; Pu, Wenqing
2005-12-01
This paper analyzes the air quality impacts of coal-fired power plants in the northern passageway of the West-East Power Transmission Project in China. A three-layer Lagrangian model called ATMOS was used to simulate the spatial distribution of incremental sulfur dioxide (SO2) and coarse particulate matter (PM10) concentrations under different emission control scenarios. In the year 2005, the emissions from planned power plants mainly affected the air quality of Shanxi, Shaanxi, the common boundary of Inner Mongolia and Shanxi, and the area around the boundary between Inner Mongolia and Ningxia. In these areas, the annually averaged incremental SO2 and PM10 concentrations exceed 2 and 2.5 microg/m3, respectively. The maximum increases of the annually averaged SO2 and PM10 concentrations are 8.3 and 7.2 microg/m3, respectively, which occur around Hancheng city, near the boundary of the Shaanxi and Shanxi provinces. After integrated control measures are considered, the maximum increases of annually averaged SO2 and PM10 concentrations fall to 4.9 and 4 microg/m3, respectively. In the year 2010, the areas affected by planned power plants are mainly North Shaanxi, North Ningxia, and Northwest Shanxi. The maximum increases of the annually averaged SO2 and PM10 concentrations are, respectively, 6.3 and 5.6 microg/m3, occurring in Northwest Shanxi, which decline to 4.4 and 4.1 microg/m3 after the control measures are implemented. The results showed that the proposed power plants mainly affect the air quality of the region where the power plants are built, with little impact on East China where the electricity will be used. The influences of planned power plants on air quality will be decreased greatly by implementing integrated control measures.
Composition and Sources of Fine and Coarse Particles Collected during 2002–2010 in Boston, MA
Masri, Shahir; Kang, Choong-Min; Koutrakis, Petros
2016-01-01
Identifying the sources, composition, and temporal variability of fine (PM2.5) and coarse (PM2.5-10) particles is a crucial component in understanding PM toxicity and establishing proper PM regulations. In this study, a Harvard Impactor was used to collect daily integrated fine and coarse particle samples every third day for nine years at a single site in Boston, MA. A total of 1,960 filters were analyzed for elements, black carbon (BC), and total PM mass. Positive Matrix Factorization (PMF) was used to identify source types and quantify their contributions to ambient PM2.5 and PM2.5-10. BC and 17 elements were identified as the main constituents in our samples. Results showed that BC, S, and Pb were associated exclusively with the fine particle mode, while 84% of V and 79% of Ni were associated with this mode. Elements found mostly (over 80%) in the coarse mode included Ca, Mn (road dust), and Cl (sea salt). PMF identified six source types for PM2.5 and three source types for PM2.5-10. Source types for PM2.5 included regional pollution, motor vehicles, sea salt, crustal/road dust, oil combustion, and wood burning. Regional pollution contributed the most, accounting for 48% of total PM2.5 mass, followed by motor vehicles (21%) and wood burning (19%). Source types for PM2.5-10 included crustal/road dust (62%), motor vehicles (22%), and sea salt (16%). A linear decrease in PM concentrations with time was observed for both fine (−5.2%/yr) and coarse (−3.6%/yr) particles. The fine-mode trend was mostly related to oil combustion and regional pollution contributions. Average PM2.5 concentrations peaked in summer (10.4 μg/m3) while PM2.5-10 concentrations were lower and demonstrated little seasonal variability. The findings of this study show that PM2.5 is decreasing more sharply than PM2.5-10 over time. This suggests the increasing importance of PM2.5-10 and traffic-related sources for PM exposure and future policies. PMID:25947125
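As a rough illustration of the source-apportionment step, the sketch below factors a hypothetical species-by-sample matrix with non-negative matrix factorization. This is only a simplified stand-in for the PMF analysis used in the study (true PMF weights every entry by its measurement uncertainty, which plain NMF does not); the matrix size and the six-factor choice merely echo the abstract.

    # Simplified stand-in for PMF: non-negative factorization of a
    # species-by-sample matrix into source contributions and profiles.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    X = rng.random((1960, 18))           # hypothetical: 1,960 samples x (BC + 17 elements)

    model = NMF(n_components=6, init="nndsvda", max_iter=1000, random_state=0)
    G = model.fit_transform(X)           # sample-by-source contributions
    F = model.components_                # source-by-species profiles

    # Fraction of total reconstructed mass attributed to each source
    mass_per_source = G.sum(axis=0) * F.sum(axis=1)
    print(mass_per_source / mass_per_source.sum())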
Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Jennings, Esther
2013-01-01
In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
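A minimal sketch of the top-down sizing idea, under the assumption that each data type buffered at a store-and-forward node must be delivered within its latency allowance, so a ground WAN link must carry at least the sum of volume/latency over the data types sharing it. The data types, volumes, latencies, and margin below are hypothetical, not SCaN figures.

    # Hypothetical top-down WAN link sizing for a store-and-forward node.
    data_types = [
        # (name, buffered volume per pass in megabits, latency allowance in seconds)
        ("housekeeping", 200.0, 60.0),
        ("science_bulk", 40000.0, 3600.0),
        ("critical_event", 8.0, 5.0),
    ]

    def required_wan_bandwidth(types, margin=1.5):
        """Margined WAN rate (Mb/s) needed to meet every latency allowance."""
        rate = sum(volume / latency for _, volume, latency in types)
        return margin * rate

    print(f"required WAN bandwidth ~ {required_wan_bandwidth(data_types):.1f} Mb/s")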
NASA Astrophysics Data System (ADS)
Girotto, M.; De Lannoy, G. J. M.; Reichle, R. H.; Rodell, M.
2015-12-01
The Gravity Recovery And Climate Experiment (GRACE) mission is unique because it provides highly accurate column integrated estimates of terrestrial water storage (TWS) variations. Major limitations of GRACE-based TWS observations are related to their monthly temporal and coarse spatial resolution (around 330 km at the equator), and to the vertical integration of the water storage components. These challenges can be addressed through data assimilation. To date, it is still not obvious how best to assimilate GRACE-TWS observations into a land surface model, in order to improve hydrological variables, and many details have yet to be worked out. This presentation discusses specific recent features of the assimilation of gridded GRACE-TWS data into the NASA Goddard Earth Observing System (GEOS-5) Catchment land surface model to improve soil moisture and shallow groundwater estimates at the continental scale. The major recent advancements introduced by the presented work with respect to earlier systems include: 1) the assimilation of gridded GRACE-TWS data product with scaling factors that are specifically derived for data assimilation purposes only; 2) the assimilation is performed through a 3D assimilation scheme, in which reasonable spatial and temporal error standard deviations and correlations are exploited; 3) the analysis step uses an optimized calculation and application of the analysis increments; 4) a poor-man's adaptive estimation of a spatially variable measurement error. This work shows that even if they are characterized by a coarse spatial and temporal resolution, the observed column integrated GRACE-TWS data have potential for improving our understanding of soil moisture and shallow groundwater variations.
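For orientation, the sketch below shows a schematic ensemble analysis increment for a single gridded TWS observation, in the spirit of a 3D ensemble Kalman update; it is not the GEOS-5/Catchment implementation, and the state layout, observation operator, and error values are invented for illustration.

    # Schematic ensemble analysis increment for one column-integrated TWS observation.
    import numpy as np

    rng = np.random.default_rng(1)
    n_members, n_state = 24, 100
    x_ens = rng.normal(size=(n_members, n_state))   # hypothetical soil-moisture/groundwater states
    H = np.zeros(n_state); H[:10] = 0.1             # toy obs operator: column-integrated TWS
    y_obs, obs_err_std = 0.5, 0.2                   # observed TWS anomaly and its error std

    y_ens = x_ens @ H                               # predicted TWS for each member
    x_pert = x_ens - x_ens.mean(axis=0)
    y_pert = y_ens - y_ens.mean()
    cov_xy = x_pert.T @ y_pert / (n_members - 1)    # state-observation covariance
    var_yy = y_pert @ y_pert / (n_members - 1)      # observation-space ensemble variance
    K = cov_xy / (var_yy + obs_err_std**2)          # Kalman gain

    y_perturbed = y_obs + obs_err_std * rng.normal(size=n_members)
    increments = (y_perturbed - y_ens)[:, None] * K[None, :]
    x_analysis = x_ens + increments                 # updated ensemble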
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Justin Charles
2009-04-01
Abstract - The majority of studies investigating the importance of coarse woody debris (CWD) to forest-floor vertebrates have taken place in the Pacific Northwest and southern Appalachian Mountains, while comparative studies in the southeastern Coastal Plain are lacking. My study was a continuation of a long-term project investigating the importance of CWD as a habitat component for shrew and herpetofaunal communities within managed pine stands in the southeastern Coastal Plain. Results suggest that addition of CWD can increase abundance of southeastern and southern short-tailed shrews. However, downed wood does not appear to be a critical habitat component for amphibians and reptiles. Rising petroleum costs and advances in wood utilization technology have resulted in an emerging biofuels market with potential to decrease CWD volumes left in forests following timber harvests. Therefore, forest managers must understand the value of CWD as an ecosystem component to maintain economically productive forests while conserving biological diversity.
NASA Astrophysics Data System (ADS)
Pithan, Felix; Shepherd, Theodore G.; Zappa, Giuseppe; Sandu, Irina
2016-07-01
State-of-the-art climate models generally struggle to represent important features of the large-scale circulation. Common model deficiencies include an equatorward bias in the location of the midlatitude westerlies and an overly zonal orientation of the North Atlantic storm track. Orography is known to strongly affect the atmospheric circulation and is notoriously difficult to represent in coarse-resolution climate models. Yet how the representation of orography affects circulation biases in current climate models is not understood. Here we show that the effects of switching off the parameterization of drag from low-level orographic blocking in one climate model resemble the biases of the Coupled Model Intercomparison Project Phase 5 ensemble: an overly zonal wintertime North Atlantic storm track, fewer European blocking events, an equatorward shift of the Southern Hemispheric jet, and an increase in the Southern Annular Mode time scale. This suggests that typical circulation biases in coarse-resolution climate models may be alleviated by improved parameterizations of low-level drag.
Novel model of an AlGaN/GaN high electron mobility transistor based on an artificial neural network
NASA Astrophysics Data System (ADS)
Cheng, Zhi-Qun; Hu, Sha; Liu, Jun; Zhang, Qi-Jun
2011-03-01
In this paper we present a novel approach to modeling AlGaN/GaN high electron mobility transistor (HEMT) with an artificial neural network (ANN). The AlGaN/GaN HEMT device structure and its fabrication process are described. The circuit-based Neuro-space mapping (neuro-SM) technique is studied in detail. The EEHEMT model is implemented according to the measurement results of the designed device, which serves as a coarse model. An ANN is proposed to model AlGaN/GaN HEMT based on the coarse model. Its optimization is performed. The simulation results from the model are compared with the measurement results. It is shown that the simulation results obtained from the ANN model of AlGaN/GaN HEMT are more accurate than those obtained from the EEHEMT model. Project supported by the National Natural Science Foundation of China (Grant No. 60776052).
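To make the modeling idea concrete, the toy sketch below fits a small neural network to I-V data produced by a crude square-law expression standing in for a coarse device model. It only illustrates ANN fitting of transistor characteristics; it is not the EEHEMT coarse model or the neuro-space-mapping implementation of the paper, and all device parameters are hypothetical.

    # Toy ANN fit to drain-current data from a crude coarse model (hypothetical parameters).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def coarse_ids(vgs, vds, vth=-3.0, k=0.05):
        """Very rough square-law drain current (A); a stand-in, not EEHEMT."""
        vov = np.maximum(vgs - vth, 0.0)
        return k * vov**2 * np.tanh(vds)

    vgs, vds = np.meshgrid(np.linspace(-4.0, 0.0, 25), np.linspace(0.0, 20.0, 25))
    X = np.column_stack([vgs.ravel(), vds.ravel()])
    y = coarse_ids(X[:, 0], X[:, 1])

    ann = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                       max_iter=5000, random_state=0).fit(X, y)
    print("fit R^2:", ann.score(X, y))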
NASA Astrophysics Data System (ADS)
Limantara, A. D.; Widodo, A.; Winarto, S.; Krisnawati, L. D.; Mudjanarko, S. W.
2018-04-01
Natural (river) gravel is now rarely used in concrete mixtures, given today's demand for higher-strength concrete. Moreover, modern high-performance concrete typically uses crushed stone for most of the coarse aggregate, although natural sand is still used for the fine fraction. Can a concrete mixture that uses natural gravel as the coarse aggregate produce a compressive strength equivalent to that of a mixture using crushed stone? To answer this, a series of tests was carried out on concrete mixes with crushed aggregate from the Kalitelu crusher, Gondang, Tulungagung and with natural stone (river gravel) from the Brantas River, Ngujang, Tulungagung, at the Materials Testing Laboratory of the Tugu Dam Construction Project, Kab. Trenggalek. The compressive strength of the concrete made with the natural gravel coarse aggregate was 19.47 MPa, while that of the mixture with crushed stone was 21.12 MPa.
Characterizing and contrasting instream and riparian coarse wood in western Montana basins
Michael K. Young; Ethan A. Mace; Eric T. Ziegler; Elaine K. Sutherland
2006-01-01
The importance of coarse wood to aquatic biota and stream channel structure is widely recognized, yet characterizations of large-scale patterns in coarse wood dimensions and loads are rare. To address these issues, we censused instream coarse wood (≥2 m long and ≥10 cm minimum diameter) and sampled riparian coarse wood and channel characteristics in and along 13 streams...
A novel capacitive absolute positioning sensor based on time grating with nanometer resolution
NASA Astrophysics Data System (ADS)
Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng
2018-05-01
The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.
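The combination of the two channels can be illustrated with a short sketch: the coarse channel (one spatial period over the full range) only has to be accurate to better than half a pitch so that it resolves the integer pitch count of the fine channel. The pitch value and readings below are hypothetical, not the prototype's calibration.

    # Combining a coarse absolute reading with a fine within-pitch phase.
    def absolute_position(coarse_mm, fine_phase, pitch_mm=1.0):
        """coarse_mm: coarse-channel estimate over the full range (mm).
        fine_phase: fine-channel phase in [0, 1) within one pitch."""
        n = round((coarse_mm - fine_phase * pitch_mm) / pitch_mm)  # integer pitch count
        return n * pitch_mm + fine_phase * pitch_mm

    print(absolute_position(coarse_mm=123.4, fine_phase=0.371))    # ~123.371 mm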
Stochastic inflation in phase space: is slow roll a stochastic attractor?
NASA Astrophysics Data System (ADS)
Grain, Julien; Vennin, Vincent
2017-05-01
An appealing feature of inflationary cosmology is the presence of a phase-space attractor, ``slow roll'', which washes out the dependence on initial field velocities. We investigate the robustness of this property under backreaction from quantum fluctuations using the stochastic inflation formalism in the phase-space approach. A Hamiltonian formulation of stochastic inflation is presented, where it is shown that the coarse-graining procedure—where wavelengths smaller than the Hubble radius are integrated out—preserves the canonical structure of free fields. This means that different sets of canonical variables give rise to the same probability distribution which clarifies the literature with respect to this issue. The role played by the quantum-to-classical transition is also analysed and is shown to constrain the coarse-graining scale. In the case of free fields, we find that quantum diffusion is aligned in phase space with the slow-roll direction. This implies that the classical slow-roll attractor is immune to stochastic effects and thus generalises to a stochastic attractor regardless of initial conditions, with a relaxation time at least as short as in the classical system. For non-test fields or for test fields with non-linear self interactions however, quantum diffusion and the classical slow-roll flow are misaligned. We derive a condition on the coarse-graining scale so that observational corrections from this misalignment are negligible at leading order in slow roll.
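For readers unfamiliar with the formalism, the commonly quoted overdamped (slow-roll) limit of stochastic inflation is the Langevin equation below; the paper works instead with the full phase-space (Hamiltonian) system, but this reduced form shows how the coarse-graining of sub-Hubble modes enters through the noise term. This is the textbook expression, not a formula transcribed from the paper.

\[
\frac{\mathrm{d}\phi}{\mathrm{d}N} = -\frac{V'(\phi)}{3H^2} + \frac{H}{2\pi}\,\xi(N),
\qquad
H^2 \simeq \frac{V(\phi)}{3M_{\mathrm{Pl}}^2},
\qquad
\langle \xi(N)\,\xi(N') \rangle = \delta(N-N'),
\]

where N is the number of e-folds and ξ is the white noise generated by modes crossing the coarse-graining scale (a fixed fraction of the Hubble radius).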
Iris Image Classification Based on Hierarchical Visual Codebook.
Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang
2014-06-01
Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.
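As a toy illustration of the coarse-to-fine coding idea only (not the paper's HVC, which combines a Vocabulary Tree with Locality-constrained Linear Coding), the sketch below builds a two-level k-means vocabulary: each descriptor is assigned first to a coarse word and then to a fine word within that coarse cluster. Descriptor dimensions and cluster counts are arbitrary.

    # Toy two-level (coarse-to-fine) visual codebook with k-means.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    descriptors = rng.normal(size=(2000, 32))        # hypothetical local texture descriptors

    coarse = KMeans(n_clusters=8, n_init=10, random_state=0).fit(descriptors)
    fine = {c: KMeans(n_clusters=16, n_init=10, random_state=0)
                 .fit(descriptors[coarse.labels_ == c])
            for c in range(8)}

    def encode(d):
        c = int(coarse.predict(d[None, :])[0])       # coarse word
        f = int(fine[c].predict(d[None, :])[0])      # fine word within that coarse cluster
        return c, f

    print(encode(descriptors[0]))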
Ray Drapek; John B. Kim; Ronald P. Neilson
2015-01-01
Land managers need to include climate change in their decisionmaking, but the climate models that project future climates operate at spatial scales that are too coarse to be of direct use. To create a dataset more useful to managers, soil and historical climate were assembled for the United States and Canada at a 5-arcminute grid resolution. Nine CMIP3 future climate...
Hydrophobically stabilized open state for the lateral gate of the Sec translocon
Zhang, Bin; Miller, Thomas F.
2010-01-01
The Sec translocon is a central component of cellular pathways for protein translocation and membrane integration. Using both atomistic and coarse-grained molecular simulations, we investigate the conformational landscape of the translocon and explore the role of peptide substrates in the regulation of the translocation and integration pathways. Inclusion of a hydrophobic peptide substrate in the translocon stabilizes the opening of the lateral gate for membrane integration, whereas a hydrophilic peptide substrate favors the closed lateral gate conformation. The relative orientation of the plug moiety and a peptide substrate within the translocon channel is similarly dependent on whether the substrate is hydrophobic or hydrophilic in character, and the energetics of the translocon lateral gate opening in the presence of a peptide substrate is governed by the energetics of the peptide interface with the membrane. Implications of these results for the regulation of Sec-mediated pathways for protein translocation vs. membrane integration are discussed. PMID:20203009
Finite Dimensional Approximations for Continuum Multiscale Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berlyand, Leonid
2017-01-24
The completed research project concerns the development of novel computational techniques for modeling nonlinear multiscale physical and biological phenomena. Specifically, it addresses the theoretical development and applications of the homogenization theory (coarse graining) approach to calculation of the effective properties of highly heterogeneous biological and bio-inspired materials with many spatial scales and nonlinear behavior. This theory studies properties of strongly heterogeneous media in problems arising in materials science, geoscience, biology, etc. Modeling of such media raises fundamental mathematical questions, primarily in partial differential equations (PDEs) and calculus of variations, the subject of the PI's research. The focus of the completed research was on mathematical models of biological and bio-inspired materials with the common theme of multiscale analysis and coarse-grain computational techniques. Biological and bio-inspired materials offer the unique ability to create environmentally clean functional materials used for energy conversion and storage. These materials are intrinsically complex, with hierarchical organization occurring on many nested length and time scales. The potential to rationally design and tailor the properties of these materials for broad energy applications has been hampered by the lack of computational techniques which are able to bridge from the molecular to the macroscopic scale. The project addressed the challenge of computational treatments of such complex materials by the development of a synergistic approach that combines innovative multiscale modeling/analysis techniques with high performance computing.
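The prototypical setting for the homogenization (coarse-graining) approach mentioned above is the elliptic problem with rapidly oscillating periodic coefficients; the formulas below are the standard textbook statement, included for orientation rather than as a result of the project.

\[
-\nabla\cdot\big(A(x/\varepsilon)\,\nabla u_\varepsilon\big) = f \ \text{in } \Omega,
\qquad u_\varepsilon = 0 \ \text{on } \partial\Omega,
\]

and as ε → 0 the solutions converge to u_0 solving the coarse-grained equation

\[
-\nabla\cdot\big(A^{\mathrm{hom}}\,\nabla u_0\big) = f,
\qquad
A^{\mathrm{hom}}_{ij} = \int_Y \big[A(y)\big(\nabla_y \chi_j + e_j\big)\big]_i \,\mathrm{d}y,
\]

where the correctors χ_j solve periodic cell problems on the unit cell Y.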
How coarse is too coarse for salmon spawning substrates?
NASA Astrophysics Data System (ADS)
Wooster, J. K.; Riebe, C. S.; Ligon, F. K.; Overstreet, B. T.
2009-12-01
Populations of Pacific salmon species have declined sharply in many rivers of the western US. Reversing these declines is a top priority and expense of many river restoration projects. To help restore salmon populations, managers often inject gravel into rivers to supplement spawning habitat that has been depleted by gravel mining and the effects of dams—which block sediment and thus impair habitat downstream by coarsening the bed where salmon historically spawned. However, there is little quantitative understanding of, nor a methodology for determining, when a river bed has become too coarse for salmon spawning. Hence there is little scientific basis for selecting sites that would optimize the restoration benefits of gravel injection (e.g., sites where flow velocities are suitable but bed materials are too coarse for spawning). To develop a quantitative understanding of what makes river beds too coarse for salmon spawning, we studied redds and spawning use in a series of California and Washington rivers where salmon spawning ability appears to be affected by coarse bed material. Our working hypothesis is that for a given flow condition, there is a maximum “threshold” particle size that a salmon of a given size is able to excavate and/or move as she builds her redd. A second, related hypothesis is that spawning use should decrease and eventually become impossible with increasing percent coverage by immovable particles. To test these hypotheses, we quantified the sizes and spatial distributions of immovably coarse particles in a series of salmon redds in each river during the peak of spawning. We also quantified spawning use and how it relates to percent coverage by immovable particles. Results from our studies of fall-run chinook salmon (Oncorhynchus tshawytscha) in the Feather River suggest that immovable particle size varies as a function of flow velocity over the redd, implying that faster water helps fish move bigger particles. Our Feather River study also suggests that the immovable particle size varies as a function of particle shape. Results from our study of fall-run chinook salmon in the Sacramento River suggest that spawning is not possible when the bed is more than 40% covered by immovable particles, consistent with our second hypothesis. We will explore these relationships further in fall 2009, when we collect data on threshold particle sizes and spawning use for both pink salmon (O. gorbuscha) in the Puyallup River and chinook salmon in the Trinity River. Because pink salmon are significantly smaller than chinook salmon, we expect that their redd building success is constrained by a lower average threshold particle size. We expect that there will be a range of threshold sizes for each run, depending on intra-run variability in fish size and variations in flow velocity. Taken together, we expect that our results will demonstrate the feasibility of a new methodology for determining when a bed has become too coarse, thus contributing to more effective management of rivers where monitoring of spawning suitability of natural gravels is a priority.
Systematic and simulation-free coarse graining of homopolymer melts: a relative-entropy-based study.
Yang, Delian; Wang, Qiang
2015-09-28
We applied the systematic and simulation-free strategy proposed in our previous work (D. Yang and Q. Wang, J. Chem. Phys., 2015, 142, 054905) to the relative-entropy-based (RE-based) coarse graining of homopolymer melts. RE-based coarse graining provides a quantitative measure of the coarse-graining performance and can be used to select the appropriate analytic functional forms of the pair potentials between coarse-grained (CG) segments, which are more convenient to use than the tabulated (numerical) CG potentials obtained from structure-based coarse graining. In our general coarse-graining strategy for homopolymer melts using the RE framework proposed here, the bonding and non-bonded CG potentials are coupled and need to be solved simultaneously. Taking the hard-core Gaussian thread model (K. S. Schweizer and J. G. Curro, Chem. Phys., 1990, 149, 105) as the original system, we performed RE-based coarse graining using the polymer reference interaction site model theory under the assumption that the intrachain segment pair correlation functions of CG systems are the same as those in the original system, which de-couples the bonding and non-bonded CG potentials and simplifies our calculations (that is, we only calculated the latter). We compared the performance of various analytic functional forms of non-bonded CG pair potential and closures for CG systems in RE-based coarse graining, as well as the structural and thermodynamic properties of original and CG systems at various coarse-graining levels. Our results obtained from RE-based coarse graining are also compared with those from structure-based coarse graining.
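For reference, the relative-entropy objective that is minimized in this type of coarse graining (in the form introduced by Shell) can be written as below; this is the general definition rather than an equation reproduced from the paper.

\[
S_{\mathrm{rel}} = \sum_{i} p_{\mathrm{AA}}(r_i)\,
\ln\!\frac{p_{\mathrm{AA}}(r_i)}{p_{\mathrm{CG}}\big(M(r_i)\big)}
\;+\; \big\langle S_{\mathrm{map}} \big\rangle_{\mathrm{AA}},
\]

where p_AA and p_CG are the configurational probabilities of the original and coarse-grained ensembles, M is the mapping from original to CG configurations, and S_map accounts for the degeneracy of original configurations mapping onto the same CG configuration; the parameters of the analytic CG pair potentials are chosen to minimize S_rel.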
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaiswal, Priyank
The goal of this project was to determine structural and stratigraphic controls on hydrate occurrence and distribution in the Green Canyon (GC) 955 and Walker Ridge (WR) 313 blocks using seismic and well data. Gas hydrate was discovered in these blocks in coarse- and fine-grained sediments during the 2009 Joint Industry Project (JIP) Leg II drilling expedition. Although the immediate interest of the exploration community is exclusively hydrate present in coarse-grained sediments, the factors that control hydrate and free gas distribution in the two blocks, and whether the coarse- and fine-grained hydrate-bearing units are related in any manner, formed the core of this research. The project spanned from 10/01/2012 to 07/31/2016. In both leased blocks, the interval spanning the gas hydrate stability zone (GHSZ) was characterized using a joint analysis of sparse ocean-bottom seismic (OBS) and dense, surface-towed multichannel seismic (MCS) data. The project team had the luxury of calibrating their results with two well logs. Advanced processing methods such as depth migration and full-waveform inversion (FWI) were used for seismic data analysis. Hydrate quantification was achieved through interpretation of the FWI velocity field using appropriate rock physics models at both blocks. The seismic modeling/inversion methodology (common to both the GC955 and WR313 blocks) was as follows. First, the MCS data were depth migrated using a P-wave velocity (VP) model constructed by inversion of the reflection arrival times of a few (four in both cases) key horizons carefully picked in the OBS data to the farthest possible offsets. Then, the resolution of the traveltime VP model was improved to wavelength scale by inverting OBS gathers up to the highest frequency possible (21.75 Hz for GC955 and 17.5 Hz for WR313) using FWI. Finally, the hydrate saturation (volume fraction) was estimated at the well locations assuming one or the other hydrate morphology (filling the primary or the secondary porosity) and was extrapolated away from the wells using the FWI VP as a guide. General outcomes were as follows. First and foremost, an imaging methodology using sparse seismic data, which is easily replicable at other sites with similar datasets, has been demonstrated. The end product of this methodology at both leased blocks is quantitative estimates of hydrate distribution. Second, at both locations there is strong evidence that the base of the GHSZ, which does not appear as a clear Bottom Simulating Reflection (BSR), manifests in the VP perturbations created by FWI, suggesting that FWI is sensitive to subtle compositional changes in shallow sediments and establishing it as a valuable tool for investigations of hydrate-bearing basins. Third, through joint interpretation of the depth-migrated image and the FWI VP model, how structure and stratigraphy jointly determine hydrate and free gas distribution in both blocks could be clearly visualized. The joint interpretation also suggests that the coarse- and fine-grained hydrate-bearing sediments at both leased blocks are connected. Site-specific results, in addition to the general results, are as follows. At GC955 the overlying fine-grained hydrate-bearing unit could have been sourced from the underlying coarse-grained hydrate-bearing channel-levee complex through a chimney feature. The channel-levee system at GC955 is compartmentalized by faults, of which only a few may be impermeable.
Although compartmentalized, the channel-levee system in GC955 as a whole might be in communication, except for selected zones. At WR313 the overlying fine-grained fracture-filled hydrate unit appears to be sourced from below the GHSZ. The reason that only a particular fine-grained unit has hydrate, despite having lower porosity than the bounding units, could be the presence of secondary porosity (such as that formed from clay dewatering under compaction). In conclusion, the project was a pioneering effort in joint analysis of OBS and MCS datasets for advancing knowledge of hydrate and free-gas system dynamics using advanced processing methods such as FWI and depth migration. Results obtained in this project can greatly advance the tools and techniques used for delineating specific hydrate prospects. Results obtained in this project can also be seamlessly incorporated into other DOE-funded projects on modeling the potential productivity and commercial viability of hydrate from sand-dominated reservoirs. The OBS and MCS data in this project were acquired in 2012 (after the JIP II drilling) by the USGS, and therefore the results are a posteriori. Nonetheless, the seismic inversion workflow established through this project can be used to generate various what-if quantification scenarios even in the absence of logs and can serve as a valuable tool for guiding drilling operations. Results from this project can augment other DOE-sponsored projects on determining the commercial viability of methane production from the Gulf of Mexico.
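As a purely illustrative example of turning a P-wave velocity into a pore-filling hydrate saturation, the sketch below inverts the three-phase Wyllie time-average relation; this is a generic stand-in, not the rock-physics models actually used in the project, and every number is hypothetical.

    # Illustrative pore-filling hydrate saturation from a P-wave velocity,
    # using the three-phase Wyllie time-average relation (hypothetical values).
    def hydrate_saturation(vp, phi=0.40, v_matrix=4500.0, v_water=1500.0, v_hydrate=3650.0):
        """Solve 1/vp = (1-phi)/v_matrix + phi*(1-Sh)/v_water + phi*Sh/v_hydrate for Sh."""
        slowness_fluid = 1.0 / vp - (1.0 - phi) / v_matrix
        sh = (phi / v_water - slowness_fluid) / (phi / v_water - phi / v_hydrate)
        return min(max(sh, 0.0), 1.0)

    print(f"Sh ~ {hydrate_saturation(vp=2900.0):.2f}")   # ~0.35 for these toy parameters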
COIN Project: Towards a zero-waste technology for concrete aggregate production in Norway
NASA Astrophysics Data System (ADS)
Cepuritis, Rolands; Willy Danielsen, Svein
2014-05-01
Aggregate production is a mining operation where no purification of the "ore" is necessary. Still, it is extremely rare that an aggregate production plant operates on the basis of a zero-waste concept. This is because, historically, the fine crushed aggregate (particles with a size of less than 2, 4 or sometimes 8 mm) has been regarded as a by-product or waste of the more valuable coarse aggregate production. The reason is that crushed coarse aggregates can easily replace coarse rounded natural stones in almost any concrete composition, while the situation with the sand is different. The production of coarse aggregate normally yields fine fractions with rough surface texture, flaky or elongated particles, and an inadequate gradation. When such a material replaces smooth and rounded natural sand grains in a concrete mix, the result is usually poor, and much more water and cement have to be used to achieve adequate concrete flow. The consequences are huge stockpiles of crushed fine fractions that can't be sold (mass balance problems) for the aggregate producers, sustainability problems for the whole industry, and environmental issues for society due to dumping and storing of the fine co-generated material. There have been attempts to utilise the material in concrete before; however, they have mostly ended in failure. There have been attempts to adjust the crushed sand to the properties of the natural sand, which would still give a lot of waste, especially if the grading had to be adjusted and the high amounts of fines abundantly present in the crushed sand had to be removed. Another fundamental reason for failure has been that historically such attempts have mainly ended up as research carried out by people (both industrial and academic) with an aggregate background (= parties willing to find a market for their crusher fines), providing only conclusions already well known by the engineers involved in concrete production. Due to the pressing situation with the remaining resources of natural sand and gravel in Scandinavia, a new and different development approach has recently been attempted within the Concrete Innovation Center (COIN) in Norway. The centre is a research-based innovation project that has brought together, and served as a source of funding to facilitate, the crucial interaction between professionals from the different industries involved (quarrying machinery suppliers, aggregate producers, concrete producers and concrete contractors) and academic people from universities and research institutions, in order to come up with a better crushed-sand solution for the future. The concept under development has been a zero-waste technology for aggregate production, where instead of reducing the amount of the crushed fines, their properties are engineered to crucially increase the overall performance of the sand in concrete. The project also involves collaboration with a state-of-the-art aggregate production plant where the new technology has already been implemented. The production process there is based on the new engineered-sand concepts, successfully supplying 100% of the produced fractions to concrete and asphalt producers.
On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.
Shao, Zhanpeng; Li, Youfu
2016-02-01
Motion trajectories tracked from the motions of humans, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of a noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of the kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure trajectory similarity in order to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
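To make the construction concrete, the sketch below computes a simple distance integral invariant for a discrete 3-D trajectory with a Gaussian kernel over the curve parameter; varying the kernel scale gives the coarse-to-fine behaviour described. It follows the general idea only, not the paper's exact kernels or the blurred-segment area estimator.

    # Distance integral invariant of a discrete 3-D trajectory (illustrative kernel).
    import numpy as np

    def distance_integral_invariant(traj, sigma):
        """traj: (T, 3) positions; returns a (T,) signature at kernel scale sigma."""
        t = np.arange(len(traj))
        kernel = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
        dists = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
        return (kernel * dists).sum(axis=1) / kernel.sum(axis=1)

    s = np.linspace(0.0, 4.0 * np.pi, 200)
    helix = np.column_stack([np.cos(s), np.sin(s), 0.1 * s])   # example trajectory
    coarse_signature = distance_integral_invariant(helix, sigma=40.0)
    fine_signature = distance_integral_invariant(helix, sigma=5.0)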
NASA Astrophysics Data System (ADS)
Orellana, Laura; Yoluk, Ozge; Carrillo, Oliver; Orozco, Modesto; Lindahl, Erik
2016-08-01
Protein conformational changes are at the heart of cell functions, from signalling to ion transport. However, the transient nature of the intermediates along transition pathways hampers their experimental detection, making the underlying mechanisms elusive. Here we retrieve dynamic information on the actual transition routes from principal component analysis (PCA) of structurally-rich ensembles and, in combination with coarse-grained simulations, explore the conformational landscapes of five well-studied proteins. Modelling them as elastic networks in a hybrid elastic-network Brownian dynamics simulation (eBDIMS), we generate trajectories connecting stable end-states that spontaneously sample the crystallographic motions, predicting the structures of known intermediates along the paths. We also show that the explored non-linear routes can delimit the lowest energy passages between end-states sampled by atomistic molecular dynamics. The integrative methodology presented here provides a powerful framework to extract and expand dynamic pathway information from the Protein Data Bank, as well as to validate sampling methods in general.
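The PCA ingredient of such an analysis is easy to sketch: given an aligned ensemble of conformations, the principal components of the 3N-dimensional coordinate matrix give the dominant collective motions. The snippet below is a minimal numpy version with a random stand-in ensemble; the elastic-network Brownian dynamics (eBDIMS) part is not sketched.

    # Minimal PCA of an aligned structural ensemble (random stand-in data).
    import numpy as np

    rng = np.random.default_rng(2)
    n_conf, n_atoms = 50, 300
    ensemble = rng.normal(size=(n_conf, 3 * n_atoms))   # hypothetical aligned conformations

    X = ensemble - ensemble.mean(axis=0)                # remove the mean structure
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    variance_fraction = S**2 / np.sum(S**2)             # spectrum of collective motions
    pc1_direction = Vt[0]                               # first principal motion (3N vector)
    projections = X @ Vt[:2].T                          # conformers in the PC1-PC2 plane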
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, F.; Dai, C.; Chen, Z.
1994-05-01
A newly developed scanning tunneling microscope (STM) capable of operating at room temperature, 77 K, and 4.2 K is presented. This compact STM has a highly symmetric and rigid tunneling unit designed as an integral frame except for the coarse and fine adjustment parts. The tunneling unit is incorporated into a small vacuum chamber that is usually pumped down to 2×10⁻⁴ Pa to avoid water contamination. The fine mechanical adjustment makes the tip approach the sample in 5 nm steps. The coarse adjustment not only changes the distance between the tip and the sample, but also adjusts the tip to be normal to the surface of the sample. With this low-temperature STM, atomic resolution images of a Bi-2212 single crystal and large-scale topographies of a YBa2Cu3O7 thin film are observed at 77 K.
Fine-scale topography in sensory systems: insights from Drosophila and vertebrates
Kaneko, Takuya; Ye, Bing
2015-01-01
To encode the positions of sensory stimuli, sensory circuits form topographic maps in the central nervous system through specific point-to-point connections between pre- and post-synaptic neurons. In vertebrate visual systems, the establishment of topographic maps involves the formation of a coarse topography followed by that of fine-scale topography that distinguishes the axon terminals of neighboring neurons. It is known that intrinsic differences in the form of broad gradients of guidance molecules instruct coarse topography while neuronal activity is required for fine-scale topography. On the other hand, studies in the Drosophila visual system have shown that intrinsic differences in cell adhesion among the axon terminals of neighboring neurons instruct the fine-scale topography. Recent studies on activity-dependent topography in the Drosophila somatosensory system have revealed a role of neuronal activity in creating molecular differences among sensory neurons for establishing fine-scale topography, implicating a conserved principle. Here we review the findings in both Drosophila and vertebrates and propose an integrated model for fine-scale topography. PMID:26091779
Accelerated life test of sputtering and anode deposit spalling in a small mercury ion thruster
NASA Technical Reports Server (NTRS)
Power, J. L.
1975-01-01
Tantalum and molybdenum sputtered from discharge chamber components during operation of a 5 centimeter diameter mercury ion thruster adhered much more strongly to coarsely grit blasted anode surfaces than to standard surfaces. Spalling of the sputtered coating did occur from a coarse screen anode surface but only in flakes less than a mesh unit long. The results were obtained in a 200 hour accelerated life test conducted at an elevated discharge potential of 64.6 volts. The test approximately reproduced the major sputter erosion and deposition effects that occur under normal operation but at approximately 75 times the normal rate. No discharge chamber component suffered sufficient erosion in the test to threaten its structural integrity or further serviceability. The test indicated that the use of tantalum-surfaced discharge chamber components in conjunction with a fine wire screen anode surface should cure the problems of sputter erosion and sputtered deposits spalling in long term operation of small mercury ion thrusters.
Region-Oriented Placement Algorithm for Coarse-Grained Power-Gating FPGA Architecture
NASA Astrophysics Data System (ADS)
Li, Ce; Dong, Yiping; Watanabe, Takahiro
An FPGA plays an essential role in industrial products due to its fast, stable and flexible features. But the power consumption of FPGAs used in portable devices is one of the critical issues. A top-down hierarchical design method is commonly used in both ASIC and FPGA design. But in the case where multiple modules are integrated in an FPGA and some of them might be in sleep mode, current FPGA architecture cannot be fully effective. In this paper, a coarse-grained power-gating FPGA architecture is proposed in which the whole area of an FPGA is partitioned into several regions and the power supply is controlled for each region, so that modules in sleep mode can be effectively powered off. We also propose a region-oriented FPGA placement algorithm, based on VPR [1], fitted to the user's hierarchical design. Simulation results show that the proposed method could reduce the power consumption of an FPGA by 38% on average by setting unused modules or regions to sleep mode.
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Viger, R.; Markstrom, S. L.
2010-12-01
In order to help environmental resource managers assess potential effects of climate change on ecosystems, the Southeast Regional Assessment Project (SERAP) began in 2009. One component of the SERAP is development and calibration of a set of multi-resolution hydrologic models of the Apalachicola-Chattahoochee-Flint (ACF) River Basin. The ACF River Basin is home to multiple fish and wildlife species of conservation concern, is regionally important for water supply, and has been a recent focus of complementary environmental and climate-change research. Hydrologic models of varying spatial extents and resolutions are required to address varied local to regional water-resource management questions as required by the scope and limits of potential management actions. These models were developed using the U.S. Geological Survey (USGS) Precipitation Runoff Modeling System (PRMS). The coarse-resolution model for the ACF Basin has a contributing area of approximately 19,200 mi2 with the model outlet located at the USGS streamflow gage on the Apalachicola River near Sumatra, Florida. Six fine-resolution PRMS models ranging in size from 153 mi2 to 1,040 mi2 are nested within the coarse-scale model, and have been developed for the following basins: upper Chattahoochee, Chestatee, and Chipola Rivers, Ichawaynochaway, Potato, and Spring Creeks. All of the models simulate basin hydrology using a daily time-step, measured climate data, and basin characteristics such as land cover and topography. Measured streamflow data are used to calibrate and evaluate computed basin hydrology. Land cover projections will be used in conjunction with downscaled Global Climate Model results to project future hydrologic conditions for this set of models.
Ultra-Compact Multitip Scanning Probe Microscope with an Outer Diameter of 50 mm
NASA Astrophysics Data System (ADS)
Cherepanov, Vasily; Zubkov, Evgeny; Junker, Hubertus; Korte, Stefan; Blab, Marcus; Coenen, Peter; Voigtländer, Bert
We present a multitip scanning tunneling microscope (STM) where four independent STM units are integrated on a diameter of 50 mm. The coarse positioning of the tips is done under the control of an optical microscope or an SEM in vacuum. The heart of this STM is a new type of piezoelectric coarse approach called Koala Drive which can have a diameter greater than 2.5 mm and a length smaller than 10 mm. Alternating movements of springs move a central tube which holds the STM tip or AFM sensor. This new operating principle provides a smooth travel sequence and avoids shaking which is intrinsically present for nanopositioners based on inertial motion with saw tooth driving signals. Inserting the Koala Drive in a piezo tube for xyz-scanning integrates a complete STM inside a 4 mm outer diameter piezo tube of <10 mm length. The use of the Koala Drive makes the scanning probe microscopy design ultra-compact and accordingly leads to a high mechanical stability. The drive is UHV, low temperature, and magnetic field compatible. The compactness of the Koala Drive allows building a four-tip STM as small as a single-tip STM with a drift of <0.2 nm/min and lowest resonance frequencies of 2.5 (xy) and 5.5 kHz (z). We present examples of the performance of the multitip STM designed using the Koala Drive.
A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.
Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J
2009-11-28
In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
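In standard notation, the relations the abstract refers to are the phase Darcy velocities, phase mass conservation, and the divergence-free total velocity; the forms below are the usual incompressible statement, included for orientation rather than copied from the paper.

\[
\mathbf{u}_\ell = -\frac{k_{r,\ell}(s_\ell)}{\mu_\ell}\,K\,\big(\nabla p - \rho_\ell \mathbf{g}\big),
\qquad
\phi\,\frac{\partial s_\ell}{\partial t} + \nabla\cdot \mathbf{u}_\ell = 0,
\qquad
\mathbf{u}_T = \sum_\ell \mathbf{u}_\ell,\quad \nabla\cdot \mathbf{u}_T = 0,
\]

so that a second-order elliptic solve for the pressure yields u_T, which is then used to advance the saturations as nonlinear hyperbolic conservation laws.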
Finding regions of interest in pathological images: an attentional model approach
NASA Astrophysics Data System (ADS)
Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo
2009-02-01
This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists in a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions.1 The pathologist's cognitive performance depends on inherent image visual cues - bottom-up information - and on acquired clinical medicine knowledge - top-down mechanisms -. Our pathologist's visual attention model integrates the latter two components. The selected bottom-up information includes local low level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3dB when comparing to a classical bottom-up model of attention.
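The final scoring step described above is simple enough to sketch: one normalized index per low-level feature inside each candidate region, averaged into a relevancy score, then thresholded. Feature names, values, and the threshold below are hypothetical.

    # Relevancy scoring of candidate regions as an average of normalized feature indexes.
    import numpy as np

    regions = [
        {"intensity": 0.8, "color": 0.6, "orientation": 0.4, "texture": 0.7},
        {"intensity": 0.2, "color": 0.1, "orientation": 0.3, "texture": 0.2},
    ]

    def is_region_of_interest(region, threshold=0.5):
        relevancy = float(np.mean(list(region.values())))   # simple average of indexes
        return relevancy >= threshold, relevancy

    for i, region in enumerate(regions):
        keep, score = is_region_of_interest(region)
        print(f"region {i}: relevancy={score:.2f}, RoI={keep}")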
Müller, Erich A; Jackson, George
2014-01-01
A description of fluid systems with molecular-based algebraic equations of state (EoSs) and by direct molecular simulation is common practice in chemical engineering and the physical sciences, but the two approaches are rarely closely coupled. The key for an integrated representation is through a well-defined force field and Hamiltonian at the molecular level. In developing coarse-grained intermolecular potential functions for the fluid state, one typically starts with a detailed, bottom-up quantum-mechanical or atomic-level description and then integrates out the unwanted degrees of freedom using a variety of techniques; an iterative heuristic simulation procedure is then used to refine the parameters of the model. By contrast, with a top-down technique, one can use an accurate EoS to link the macroscopic properties of the fluid and the force-field parameters. We discuss the latest developments in a top-down representation of fluids, with a particular focus on a group-contribution formulation of the statistical associating fluid theory (SAFT-γ). The accurate SAFT-γ EoS is used to estimate the parameters of the Mie force field, which can then be used with confidence in direct molecular simulations to obtain thermodynamic, structural, interfacial, and dynamical properties that are otherwise inaccessible from the EoS. This is exemplified for several prototypical fluids and mixtures, including carbon dioxide, hydrocarbons, perfluorohydrocarbons, and aqueous surfactants.
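The force field referred to is built on the Mie (generalized Lennard-Jones) pair potential, whose standard form is given below; the coarse-grained segment parameters σ, ε, λ_r, and λ_a are the quantities estimated top-down from the SAFT-γ equation of state.

\[
u^{\mathrm{Mie}}(r) = \mathcal{C}\,\varepsilon
\left[\left(\frac{\sigma}{r}\right)^{\lambda_r} - \left(\frac{\sigma}{r}\right)^{\lambda_a}\right],
\qquad
\mathcal{C} = \frac{\lambda_r}{\lambda_r-\lambda_a}
\left(\frac{\lambda_r}{\lambda_a}\right)^{\lambda_a/(\lambda_r-\lambda_a)},
\]

where σ is the segment diameter, ε the well depth, and λ_r, λ_a the repulsive and attractive exponents.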
The fast multipole method and point dipole moment polarizable force fields.
Coles, Jonathan P; Masella, Michel
2015-01-14
We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single energy point calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.
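For context, the induced point-dipole model referred to requires solving the self-consistent equations below at every time step; the pairwise field sums in these equations are the O(N²) work that the fast multipole method reduces to O(N). This is the generic form of the model, not an equation taken from the paper.

\[
\boldsymbol{\mu}_i = \alpha_i\!\left(\mathbf{E}_i^{\mathrm{perm}} + \sum_{j\neq i} \mathbf{T}_{ij}\,\boldsymbol{\mu}_j\right),
\]

where α_i is the atomic polarizability, E_i^perm the electric field from the permanent charges, and T_ij the dipole-dipole interaction tensor.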
The Aerosol Coarse Mode Initiative
NASA Astrophysics Data System (ADS)
Arnott, W. P.; Adhikari, N.; Air, D.; Kassianov, E.; Barnard, J.
2014-12-01
Many areas of the world show an aerosol volume distribution with a significant coarse mode and sometimes a dominant coarse mode. The large coarse mode is usually due to dust, but sea salt aerosol can also play an important role. However, in many field campaigns the coarse mode tends to be ignored, because it is difficult to measure. This lack of measurements leads directly to a concomitant "lack of analysis" of this mode. Because coarse mode aerosols can have significant effects on radiative forcing, both in the shortwave and longwave spectrum, the coarse mode -- and these forcings -- should be accounted for in atmospheric models. Forcings based only on fine mode aerosols have the potential to be misleading. In this paper we describe examples of large coarse modes that occur in areas of large aerosol loading (Mexico City; Barnard et al., 2010) as well as small loadings (Sacramento, CA, Kassianov et al., 2012; and Reno, NV). We then demonstrate that: (1) the coarse mode can contribute significantly to radiative forcing, relative to the fine mode, and (2) neglecting the coarse mode may result in poor comparisons between measurements and models. Next we describe -- in general terms -- the limitations of instrumentation to measure the coarse mode. Finally, we suggest a new initiative aimed at examining coarse mode aerosol generation mechanisms; transport and deposition; chemical composition; visible and thermal IR refractive indices; morphology; microphysical behavior when deposited on snow and ice; and specific instrumentation needs. Barnard, J. C., J. D. Fast, G. Paredes-Miranda, W. P. Arnott, and A. Laskin, 2010: Technical Note: Evaluation of the WRF-Chem "Aerosol Chemical to Aerosol Optical Properties" Module using data from the MILAGRO campaign, Atmospheric Chemistry and Physics, 10, 7325-7340. Kassianov, E. I., M. S. Pekour, and J. C. Barnard, 2012: Aerosols in Central California: Unexpectedly large contribution of coarse mode to aerosol radiative forcing, Geophys. Res. Lett., 39, L20806, doi:10.1029/2012GL053469.
Garion, Liora; Dubin, Uri; Rubin, Yoav; Khateb, Mohamed; Schiller, Yitzhak; Azouz, Rony; Schiller, Jackie
2014-01-01
Texture discrimination is a fundamental function of somatosensory systems, yet the manner by which texture is coded and spatially represented in the barrel cortex are largely unknown. Using in vivo two-photon calcium imaging in the rat barrel cortex during artificial whisking against different surface coarseness or controlled passive whisker vibrations simulating different coarseness, we show that layer 2–3 neurons within barrel boundaries differentially respond to specific texture coarsenesses, while only a minority of neurons responded monotonically with increased or decreased surface coarseness. Neurons with similar preferred texture coarseness were spatially clustered. Multi-contact single unit recordings showed a vertical columnar organization of texture coarseness preference in layer 2–3. These findings indicate that layer 2–3 neurons perform high hierarchical processing of tactile information, with surface coarseness embodied by distinct neuronal subpopulations that are spatially mapped onto the barrel cortex. DOI: http://dx.doi.org/10.7554/eLife.03405.001 PMID:25233151
The decomposition of fine and coarse roots: their global patterns and controlling factors
Zhang, Xinyue; Wang, Wei
2015-01-01
Fine root decomposition represents a large carbon (C) cost to plants, and serves as a potential soil C source, as well as a substantial proportion of net primary productivity. Coarse roots differ markedly from fine roots in morphology, nutrient concentrations, functions, and decomposition mechanisms. Still poorly understood is whether a consistent global pattern exists between the decomposition of fine (<2 mm root diameter) and coarse (≥2 mm) roots. A comprehensive terrestrial root decomposition dataset, including 530 observations from 71 sampling sites, was thus used to compare global patterns of decomposition of fine and coarse roots. Fine roots decomposed significantly faster than coarse roots in middle latitude areas, but their decomposition in low latitude regions was not significantly different from that of coarse roots. Coarse root decomposition showed more dependence on climate, especially mean annual temperature (MAT), than did fine roots. Initial litter lignin content was the most important predictor of fine root decomposition, while lignin to nitrogen ratios, MAT, and mean annual precipitation were the most important predictors of coarse root decomposition. Our study emphasizes the necessity of separating fine roots and coarse roots when predicting the response of belowground C release to future climate changes. PMID:25942391
[Intervention of coarse cereals on lipid metabolism in rats].
Guo, Yanbo; Zhai, Chengkai; Wang, Yanli; Zhang, Qun; Ding, Zhoubo; Jin, Xin
2010-03-01
The aim was to observe the effect of coarse cereals on improving the disorder of lipid metabolism and on the expression of PPARgamma mRNA in white adipose tissue in rats, in order to investigate the mechanism by which coarse cereals act on lipid metabolism disorder. Forty-four SPF rats were randomly divided into 4 groups: the negative control group was fed a normal diet, and 3 experimental groups were fed a high-fat modeling diet for 6 weeks for model building. The 3 experimental groups, the coarse cereals group, the rice-flour group and the hyperlipemia model group, were then fed a coarse-cereals high-fat diet, a rice-flour high-fat diet and the high-fat modeling diet, respectively, for another 15 weeks. Compared with the hyperlipemia model group, serum TG, TC, IL-6 and TNF-alpha in the coarse cereals group declined significantly (P < 0.05), serum HDL-C in the coarse cereals group was higher than that in the rice-flour group and the hyperlipemia model group (P < 0.05), and LPL, HL and TNF-alpha in the coarse cereals group were close to those of the negative control group. Moreover, the expression of PPAR-gamma mRNA in white adipose tissue of the coarse cereals group was higher than in the other groups. Coarse cereals could activate PPARgamma and enhance the activity of key enzymes in lipid metabolism, so as to reduce TG levels, relieve inflammation and eventually improve lipid dysmetabolism.
EFFECT OF SITE ON BACTERIAL POPULATIONS IN THE SAPWOOD OF COARSE WOODY DEBRIS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Emma G.; Waldrop, Thomas A.; McElreath, Susan D.
1998-01-01
Porter, Emma G., T.A. Waldrop, Susan D. McElreath, and Frank H. Tainter. 1998. Effect of site on bacterial populations in the sapwood of coarse woody debris. Pp. 480-484. In: Proc. 9th Bienn. South. Silv. Res. Conf. T.A. Waldrop (ed). USDA Forest Service, Southern Research Station. Gen. Tech. Rep. SRS-20. Abstract: Coarse woody debris (CWD) is an important structural component of southeastern forest ecosystems, yet little is known about its dynamics in these systems. This project identified bacterial populations associated with CWD and their dynamics across landscape ecosystem classification (LEC) units. Bolts of red oak and loblolly pine were placed on plots at each of three hydric, mesic, and xeric sites at the Savannah River Station. After the controls were processed, samples were taken at four intervals over a 16-week period. Samples were ground within an anaerobe chamber using nonselective media. Aerobic and facultative anaerobic bacteria were identified using the Biolog system and the anaerobes were identified using the API 20A system. Major genera isolated were: Bacillus, Buttiauxella, Cedecea, Enterobacter, Erwinia, Escherichia, Klebsiella, Pantoea, Pseudomonas, Serratia, and Xanthomonas. The mean total isolates were determined by LEC units and sample intervals. Differences occurred between the sample intervals with total isolates of 6.67, 13.33, 10.17, and 9.50 at 3, 6, 10, and 16 weeks, respectively. No significant differences in the numbers of bacteria isolated were found between LEC units.
Building Excellence in Project Execution: Integrated Project Management
2015-04-30
Systems Center Pacific (SSC Pacific) is addressing this challenge by adopting and refining the CMMI Model and building the tenets of integrated project management (IPM) into project planning and execution... successfully managing stakeholder expectations and meeting requirements. Under the Capability Maturity Model Integration (CMMI), IPM is defined as...
Foo, Patrick; Warren, William H; Duchon, Andrew; Tarr, Michael J
2005-03-01
Do humans integrate experience on specific routes into metric survey knowledge of the environment, or do they depend on a simpler strategy of landmark navigation? The authors tested this question using a novel shortcut paradigm during walking in a virtual environment. The authors find that participants could not take successful shortcuts in a desert world but could do so with dispersed landmarks in a forest. On catch trials, participants were drawn toward the displaced landmarks whether the landmarks were clustered near the target location or along the shortcut route. However, when landmarks appeared unreliable, participants fell back on coarse survey knowledge. Like honeybees (F. C. Dyer, 1991), humans do not appear to derive accurate cognitive maps from path integration to guide navigation but, instead, depend on landmarks when they are available.
NASA Astrophysics Data System (ADS)
Wouters, Hendrik; Vanden Broucke, Sam; van Lipzig, Nicole; Demuzere, Matthias
2016-04-01
Recent research clearly shows that climate modelling at high resolution - which resolves deep convection, detailed orography and land-use including urbanization - leads to better modelling performance with respect to temperatures, the boundary layer, clouds and precipitation. The increasing computational power enables the climate research community to address climate-change projections with higher accuracy and much more detail. In the framework of the CORDEX.be project, aiming for coherent high-resolution micro-ensemble projections for Belgium employing different GCMs and RCMs, KU Leuven contributes by downscaling EC-EARTH global climate model projections (provided by the Royal Meteorological Institute of the Netherlands) to the Belgian domain. The downscaling is obtained with regional climate simulations at 12.5 km resolution over Europe (CORDEX-EU domain) and at 2.8 km resolution over Belgium (CORDEX.be domain) using COSMO-CLM coupled to the urban land-surface parametrization TERRA_URB. This is done for the present day (1975-2005) and the future (2040-2070 and 2070-2100). In these high-resolution runs, both GHG changes (in accordance with RCP8.5) and urban land-use changes (in accordance with a business-as-usual urban expansion scenario) are taken into account. Based on these simulations, it is shown how climate-change statistics are modified when going from coarse-resolution to high-resolution modelling. The climate-change statistics of particular interest are the changes in the number of extreme precipitation events and extreme heat waves in cities. Hereby, the robustness of the signal change between the coarse and high resolution is further investigated, as well as whether a (statistical) translation is possible. The different simulations also allow us to address the relative impact of, and synergy between, urban expansion and increased GHG on the climate-change statistics. Hereby, it is investigated for which climate-change statistics the urban heat island and urban expansion are relevant, and to what extent the urban expansion can be included in the coarse-to-high-resolution translation.
Eigenvector decomposition of full-spectrum x-ray computed tomography.
Gonzales, Brian J; Lalush, David S
2012-03-07
Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system which incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral response of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projection of FBP onto basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade off along the energy axis was significantly improved using the eigenvalue decomposition basis functions.
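A minimal sketch of the kind of eigenvector-basis projection described above, not the authors' code: a material spectral-response matrix is eigen-decomposed over the energy axis and an energy-resolved FBP stack is projected onto the leading eigenvectors to suppress noise. All array names, shapes, and the number of retained basis vectors are assumptions for illustration.

```python
# Hedged sketch: denoise energy-resolved FBP reconstructions by projecting the
# energy axis onto eigenvectors derived from a material spectral-response matrix.
import numpy as np

def spectral_basis(material_spectra, n_keep=3):
    """material_spectra : (n_materials, n_energy) measured spectral responses.
    Returns the n_keep leading eigenvectors (n_energy, n_keep)."""
    cov = material_spectra.T @ material_spectra        # symmetric over energy axis
    w, v = np.linalg.eigh(cov)                         # ascending eigenvalues
    return v[:, np.argsort(w)[::-1][:n_keep]]          # keep the largest modes

def project_fbp(fbp_stack, basis):
    """Project an energy-resolved FBP stack (nx, ny, n_energy) onto the basis
    and re-expand, suppressing components outside the spanned subspace."""
    coeffs = fbp_stack @ basis                          # (nx, ny, n_keep)
    return coeffs @ basis.T                             # back to (nx, ny, n_energy)

# toy usage with synthetic shapes only
rng = np.random.default_rng(0)
S = rng.random((4, 64))              # 4 materials, 64 energy bins
recon = rng.random((128, 128, 64))   # energy-resolved FBP images
denoised = project_fbp(recon, spectral_basis(S, n_keep=3))
```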
Stafford, Ben K; Sher, Alexander; Litke, Alan M; Feldheim, David A
2009-10-29
During development, retinal axons project coarsely within their visual targets before refining to form organized synaptic connections. Spontaneous retinal activity, in the form of acetylcholine-driven retinal waves, is proposed to be necessary for establishing these projection patterns. In particular, both axonal terminations of retinal ganglion cells (RGCs) and the size of receptive fields of target neurons are larger in mice that lack the beta2 subunit of the nicotinic acetylcholine receptor (beta2KO). Here, using a large-scale, high-density multielectrode array to record activity from hundreds of RGCs simultaneously, we present analysis of early postnatal retinal activity from both wild-type (WT) and beta2KO retinas. We find that beta2KO retinas have correlated patterns of activity, but many aspects of these patterns differ from those of WT retina. Quantitative analysis suggests that wave directionality, coupled with short-range correlated bursting patterns of RGCs, work together to refine retinofugal projections.
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
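The following one-dimensional toy, under stated assumptions, illustrates the multiresolution idea only (a reduced parameterization combining a fine region and a downsampled coarse region, solved as penalized weighted least squares); it is not the paper's CBCT implementation, and the system matrix, weights, and penalty are placeholders.

```python
# Hedged 1-D toy of a fine/coarse union-grid PWLS solve.
import numpy as np

n_fine, n_coarse_native, ds = 32, 32, 4           # native voxels and downsampling
n_coarse = n_coarse_native // ds

# Mapping M from reduced parameters [fine | coarse] to the native grid:
# fine voxels pass through; each coarse parameter replicates into ds voxels.
M = np.zeros((n_fine + n_coarse_native, n_fine + n_coarse))
M[:n_fine, :n_fine] = np.eye(n_fine)
for j in range(n_coarse):
    M[n_fine + j * ds : n_fine + (j + 1) * ds, n_fine + j] = 1.0

rng = np.random.default_rng(1)
A = rng.random((200, n_fine + n_coarse_native))   # toy projection matrix
x_true = np.concatenate([np.sin(np.linspace(0, 3, n_fine)),
                         np.ones(n_coarse_native)])
y = A @ x_true + 0.01 * rng.standard_normal(200)
w = np.ones(200)                                   # statistical weights

# First-difference roughness penalty in the reduced space, spanning the
# boundary between the last fine parameter and the first coarse parameter.
D = np.diff(np.eye(n_fine + n_coarse), axis=0)
beta = 0.1

# Normal equations of the PWLS objective ||y - A M x||_W^2 + beta ||D x||^2
AM = A @ M
H = AM.T @ (w[:, None] * AM) + beta * D.T @ D
x_hat = np.linalg.solve(H, AM.T @ (w * y))
print(x_hat.shape)   # (n_fine + n_coarse,) reduced-resolution solution
```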
Electrical resistivity of mechanically stabilized earth wall backfill
NASA Astrophysics Data System (ADS)
Snapp, Michael; Tucker-Kulesza, Stacey; Koehn, Weston
2017-06-01
Mechanically stabilized earth (MSE) retaining walls utilized in transportation projects are typically backfilled with coarse aggregate. One of the current testing procedures used to select backfill material for construction of MSE walls is the American Association of State Highway and Transportation Officials standard T 288, "Standard Method of Test for Determining Minimum Laboratory Soil Resistivity." T 288 is designed to test a soil sample's electrical resistivity, which correlates to its corrosive potential. The test is run on soil material passing the No. 10 sieve and is believed to be inappropriate for coarse aggregate. Therefore, researchers have proposed new methods to measure the electrical resistivity of coarse aggregate samples in the laboratory. There is a need to verify that the proposed methods yield results representative of in situ conditions; however, no in situ measurement of the electrical resistivity of MSE wall backfill is established. Electrical resistivity tomography (ERT) provides a two-dimensional (2D) profile of the bulk resistivity of backfill material in situ. The objective of this study was to characterize the bulk resistivity of in-place MSE wall backfill aggregate using ERT. Five MSE walls were tested via ERT to determine the bulk resistivity of the backfill. Three of the walls were reinforced with polymeric geogrid, one wall was reinforced with metallic strips, and one wall was a gravity retaining wall with no reinforcement. Variability of the measured resistivity distribution within the backfill may be a result of non-uniform particle sizes, thoroughness of compaction, and the presence of water. A quantitative post-processing algorithm was developed to calculate the mean bulk resistivity of the in situ backfill. It is recommended that the ERT data be used to verify proposed testing methods for coarse aggregate that are designed to yield data representative of in situ conditions. A preliminary analysis suggests that ERT may be utilized as construction quality assurance for thoroughness of compaction in MSE construction; however, more data are needed.
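The abstract does not give the post-processing algorithm itself; the sketch below only illustrates the general idea of averaging an inverted 2-D resistivity section over the backfill region. The function name, coordinate conventions, and geometry inputs are assumptions.

```python
# Hedged sketch: mean bulk resistivity of the backfill zone of a 2-D ERT section.
import numpy as np

def mean_backfill_resistivity(resistivity, x, z, wall_x_range, max_depth):
    """resistivity : 2-D array (nz, nx) of inverted resistivity values (ohm-m)
    x, z          : 1-D coordinate arrays for columns and rows (m)
    wall_x_range  : (xmin, xmax) horizontal extent of the MSE backfill
    max_depth     : depth of the reinforced zone (m)"""
    in_x = (x >= wall_x_range[0]) & (x <= wall_x_range[1])
    in_z = z <= max_depth
    block = resistivity[np.ix_(in_z, in_x)]
    return float(np.nanmean(block))

# toy usage
rho = np.random.default_rng(2).uniform(500, 5000, size=(20, 60))
xm, zm = np.linspace(0, 30, 60), np.linspace(0, 10, 20)
print(mean_backfill_resistivity(rho, xm, zm, (5.0, 25.0), 6.0))
```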
Patry, Cynthia; Davidson, Robert; Lucotte, Marc; Béliveau, Annie
2013-08-01
Recent research on slash-and-burn agriculture conducted in the Amazonian basin has suggested that soils must be left under forested fallows for at least 10 to 15 years to regain fertility levels comparable to non-disturbed forests and allow for short-cycle crop cultivation. However, small-scale farmers nowadays tend to re-burn secondary forests after as little as 3 to 5 years, which could further reduce soil fertility and enhance the transfer of mercury (Hg), naturally present in soils of the region, towards water courses. The present research project sets out to characterize the impact of forested fallows of differing age and land-use history on soil properties (fertility and Hg contents) in the region of the Tapajós River, an active pioneer front of the Brazilian Amazon. To do this, soil samples were retrieved in forested fallows of variable age and in control primary forests. In general, soil fertility of grouped forested fallows of different ages was similar to that of the primary forests. But when discriminating soils according to their texture, forested fallows on coarse-grained soils still had much higher NH4/NO3 ratios and NH4 and Ca contents than primary forests, even 15 years after burning. The impact of repeated burnings was also assessed. Fallows on coarse-grained soils showed an impoverishment in all variables related to fertility when the number of burnings was 5 or more. For fallows on fine-grained soils that underwent 5 or more burnings, NO3 contents were low although a cation enrichment was observed. Total soil Hg content was also sensitive to repeated burnings, showing similar losses for forested fallows established on both types of soil. However, Hg linked to coarse particles appeared to migrate back towards fine particles at the surface of coarse-grained soils in fallows older than 7 years. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fix, N. J.
The scope of the CH2M Hill Plateau Remediation Company, LLC (CHPRC) Groundwater and Technical Integration Support (Master Project) is for Pacific Northwest National Laboratory staff to provide technical and integration support to CHPRC. This work includes conducting investigations at the 300-FF-5 Operable Unit and other groundwater operable units, and providing strategic integration, technical integration and assessments, remediation decision support, and science and technology. The projects under this Master Project will be defined and included within the Master Project throughout the fiscal year, and will be incorporated into the Master Project Plan. This Quality Assurance Management Plan provides the quality assurance requirements and processes that will be followed by the CHPRC Groundwater and Technical Integration Support (Master Project) and all releases associated with the CHPRC Soil and Groundwater Remediation Project. The plan is designed to be used exclusively by project staff.
Project for Integration of Pupils with Special Needs in Spain.
ERIC Educational Resources Information Center
Marchesi, Alvaro
1986-01-01
This paper analyzes a project approved by the Spanish government in 1985 to integrate special needs children into regular education. Outlined are characteristics of the Spanish educational system, parameters of practice in the integration project, and plans for the systematic evaluation of the integration project. (Author/JDD)
Zhang, Jiarui; Zhang, Yingjie; Chen, Bo
2017-12-20
The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields. The measurement accuracy is mainly determined by the out-of-focus projector calibration accuracy. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally demonstrates that the projector has noticeable distortions outside its focus plane. Based on this observation, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final calibration parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.
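A hedged sketch of the kind of high-order radial plus tangential distortion model the abstract refers to, using the common Brown-Conrady coefficient convention; the coefficient names, the residual function, and the idea of feeding it to a generic nonlinear least-squares optimizer are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a radial + tangential lens-distortion model on the projection plane.
import numpy as np

def distort(xy_norm, k=(0.0, 0.0, 0.0), p=(0.0, 0.0)):
    """Apply radial (k1..k3) and tangential (p1, p2) distortion to normalized
    projector-plane coordinates xy_norm of shape (N, 2)."""
    x, y = xy_norm[:, 0], xy_norm[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
    xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    yd = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    return np.column_stack([xd, yd])

def reprojection_residual(params, xy_ideal, xy_observed):
    """Residuals for a nonlinear least-squares refinement (e.g. with
    scipy.optimize.least_squares) of the distortion coefficients."""
    k, p = params[:3], params[3:5]
    return (distort(xy_ideal, k, p) - xy_observed).ravel()
```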
Coarse-graining using the relative entropy and simplex-based optimization methods in VOTCA
NASA Astrophysics Data System (ADS)
Rühle, Victor; Jochum, Mara; Koschke, Konstantin; Aluru, N. R.; Kremer, Kurt; Mashayak, S. Y.; Junghans, Christoph
2014-03-01
Coarse-grained (CG) simulations are an important tool to investigate systems on larger time and length scales. Several methods for systematic coarse-graining were developed, varying in complexity and the property of interest. Thus, the question arises which method best suits a specific class of system and desired application. The Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) provides a uniform platform for coarse-graining methods and allows for their direct comparison. We present recent advances of VOTCA, namely the implementation of the relative entropy method and downhill simplex optimization for coarse-graining. The methods are illustrated by coarse-graining SPC/E bulk water and a water-methanol mixture. Both CG models reproduce the pair distributions accurately. SYM is supported by AFOSR under grant 11157642 and by NSF under grant 1264282. CJ was supported in part by the NSF PHY11-25915 at KITP. K. Koschke acknowledges funding by the Nestle Research Center.
Design and implementation of Gm-APD array readout integrated circuit for infrared 3D imaging
NASA Astrophysics Data System (ADS)
Zheng, Li-xia; Yang, Jun-hao; Liu, Zhao; Dong, Huai-peng; Wu, Jin; Sun, Wei-feng
2013-09-01
A single-photon-detecting readout integrated circuit (ROIC) array capable of infrared 3D imaging by photon detection and time-of-flight measurement is presented in this paper. InGaAs avalanche photodiodes (APDs), dynamically biased in Geiger mode by gate-controlled active quenching circuits (AQCs), are used here. The time-of-flight is accurately measured by a highly accurate time-to-digital converter (TDC) integrated in the ROIC. For 3D imaging, a frame-rate control technique is applied to the pixel detection, so that the APD associated with each pixel is controlled by an individual AQC to sense and quench the avalanche current, providing a digital CMOS-compatible voltage pulse. After each first sense, the detector is reset to wait for the next frame operation. We employ counters with a two-segment coarse-fine architecture, where the coarse conversion is achieved by a 10-bit pseudo-random linear feedback shift register (LFSR) in each pixel and a 3-bit fine conversion is realized by a ring delay line shared by all pixels. The reference clock driving the LFSR counter can be generated within the ring-delay-line oscillator or provided by an external clock source. The circuit is designed and implemented in CSMC 0.5 μm standard CMOS technology, and the total chip area is around 2 mm × 2 mm for the 8×8-format ROIC with a 150 μm pixel pitch. The simulation results indicate that the relative time resolution of the proposed ROIC can achieve less than 1 ns, and preliminary test results show that the circuit functions correctly.
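As a rough illustration of how a two-segment coarse-fine TDC result is combined into a time stamp: the register widths below follow the abstract (10-bit coarse counter, 3-bit fine interpolation), but the clock frequency is an assumption and the LFSR state is assumed to have already been decoded into an ordinary binary count.

```python
# Hedged sketch of combining coarse and fine TDC codes into a time-of-flight value.
def tdc_time(coarse_count, fine_count, clock_period_ns, fine_bins=8):
    """coarse_count    : decoded value of the per-pixel coarse counter (0..1023)
    fine_count      : ring-delay-line interpolation within one clock (0..7)
    clock_period_ns : period of the reference clock driving the counter"""
    lsb = clock_period_ns / fine_bins          # fine time resolution
    return coarse_count * clock_period_ns + fine_count * lsb

# Example: an assumed 125 MHz reference clock (8 ns period) gives a 1 ns fine LSB,
# of the same order as the ~1 ns resolution quoted in the abstract.
print(tdc_time(coarse_count=37, fine_count=5, clock_period_ns=8.0))  # 301.0 ns
```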
Coarse woody debris: Managing benefits and fire hazard in the recovering forest
James K. Brown; Elizabeth D. Reinhardt; Kylie A. Kramer
2003-01-01
Management of coarse woody debris following fire requires consideration of its positive and negative values. The ecological benefits of coarse woody debris and fire hazard considerations are summarized. This paper presents recommendations for desired ranges of coarse woody debris. Example simulations illustrate changes in debris over time and with varying management....
Coarse-graining errors and numerical optimization using a relative entropy framework
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2011-03-01
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
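The abstract's framework can be summarized by the standard relative-entropy gradient, dS_rel/dλ = β(⟨∂U_CG/∂λ⟩_AA − ⟨∂U_CG/∂λ⟩_CG), where the first average runs over mapped atomistic configurations and the second over the current CG model's own ensemble. The sketch below shows one steepest-descent update for a toy parametric pair potential; the potential form, the pre-sampled distance arrays, and the learning rate are assumptions, not the authors' implementation.

```python
# Hedged sketch of one relative-entropy parameter update for a toy CG pair potential.
import numpy as np

def dU_dparams(r, params):
    """Derivatives of a toy potential U(r) = eps*(sigma/r)**12 with respect
    to (eps, sigma), evaluated at pair distances r."""
    eps, sigma = params
    dU_deps = (sigma / r) ** 12
    dU_dsigma = eps * 12 * sigma**11 / r**12
    return np.stack([dU_deps, dU_dsigma], axis=-1)   # shape (len(r), 2)

def relative_entropy_step(params, r_mapped_AA, r_sampled_CG, beta, lr=1e-3):
    # grad S_rel = beta * ( <dU/dparams>_AA-mapped  -  <dU/dparams>_CG-model )
    grad = beta * (dU_dparams(r_mapped_AA, params).mean(axis=0)
                   - dU_dparams(r_sampled_CG, params).mean(axis=0))
    return params - lr * grad                        # steepest-descent update

# toy usage with fake distance samples
rng = np.random.default_rng(3)
params = np.array([1.0, 0.9])
params = relative_entropy_step(params,
                               r_mapped_AA=rng.uniform(0.9, 2.0, 5000),
                               r_sampled_CG=rng.uniform(0.9, 2.0, 5000),
                               beta=1.0)
print(params)
```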
Low-cost solar array project and Proceedings of the 14th Project Integration Meeting
NASA Technical Reports Server (NTRS)
Mcdonald, R. R.
1980-01-01
Activities are reported on the following areas: project analysis and integration; technology development in silicon material, large area sheet silicon, and encapsulation; production process and equipment development; and engineering and operations, and the steps taken to integrate these efforts. Visual materials presented at the project Integration Meeting are included.
Coarse-sediment bands on the inner shelf of southern Monterey Bay, California
Hunter, R.E.; Dingler, J.R.; Anima, R.J.; Richmond, B.M.
1988-01-01
Bands of coarse sand that trend parallel to the shore, unlike the approximately shore-normal bands found in many inner shelf areas, occur in southern Monterey Bay at water depths of 10-20 m, less than 1 km from the shore. The bands are 20-100 m wide and alternate with bands of fine sand that are of similar width. The coarse-sand bands are as much as 1 m lower than the adjacent fine-sand bands, which have margins inclined at angles of about 20°. The mean grain sizes of the coarse and fine sand are in the range of 0.354-1.0 mm and 0.125-0.354 mm, respectively. Wave ripples that average about 1 m in spacing always occur in the coarse-sand bands. Over a period of 3 yrs, the individual bands moved irregularly and changed in shape, as demonstrated by repeated sidescan sonar surveys and by the monitoring of rods jetted into the sea floor. However, the overall pattern and distribution of the bands remained essentially unchanged. Cores, 0.5-1.0 m long, taken in coarse-sand bands contain 0.2-0.5 m of coarse sand overlying fine sand or interbedded fine and coarse sand. Cores from fine-sand bands have at least one thin coarse sand layer at about the depth of the adjacent coarse-sand band. None of the cores revealed a thick deposit of coarse sand. The shore-parallel bands are of unknown origin. Their origin is especially puzzling because approximately shore-normal bands are present in parts of the study area and immediately to the north. © 1988.
Stochastic inflation in phase space: is slow roll a stochastic attractor?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grain, Julien; Vennin, Vincent, E-mail: julien.grain@ias.u-psud.fr, E-mail: vincent.vennin@port.ac.uk
An appealing feature of inflationary cosmology is the presence of a phase-space attractor, ''slow roll'', which washes out the dependence on initial field velocities. We investigate the robustness of this property under backreaction from quantum fluctuations using the stochastic inflation formalism in the phase-space approach. A Hamiltonian formulation of stochastic inflation is presented, where it is shown that the coarse-graining procedure—where wavelengths smaller than the Hubble radius are integrated out—preserves the canonical structure of free fields. This means that different sets of canonical variables give rise to the same probability distribution which clarifies the literature with respect to this issue. The role played by the quantum-to-classical transition is also analysed and is shown to constrain the coarse-graining scale. In the case of free fields, we find that quantum diffusion is aligned in phase space with the slow-roll direction. This implies that the classical slow-roll attractor is immune to stochastic effects and thus generalises to a stochastic attractor regardless of initial conditions, with a relaxation time at least as short as in the classical system. For non-test fields or for test fields with non-linear self interactions however, quantum diffusion and the classical slow-roll flow are misaligned. We derive a condition on the coarse-graining scale so that observational corrections from this misalignment are negligible at leading order in slow roll.
NASA Astrophysics Data System (ADS)
Wengel, C.; Latif, M.; Park, W.; Harlaß, J.; Bayr, T.
2018-05-01
A long-standing difficulty of climate models is to capture the annual cycle (AC) of eastern equatorial Pacific (EEP) sea surface temperature (SST). In this study, we first examine the EEP SST AC in a set of integrations of the coupled Kiel Climate Model in which only the atmosphere model resolution differs. When employing coarse horizontal and vertical atmospheric resolution, significant biases in the EEP SST AC are observed. These are reflected in an erroneous timing of the cold tongue's onset and termination as well as in an underestimation of the boreal spring warming amplitude. A large portion of these biases is linked to a wrong simulation of zonal surface winds, which can be traced back to precipitation biases on both sides of the equator and an erroneous low-level atmospheric circulation over land. Part of the SST biases is also related to shortwave radiation biases caused by cloud cover biases. Both wind and cloud cover biases are inherent to the atmospheric component, as shown by companion uncoupled atmosphere model integrations forced by observed SSTs. Enhancing atmosphere model resolution, horizontal and vertical, markedly reduces zonal wind and cloud cover biases in coupled as well as uncoupled mode and generally improves simulation of the EEP SST AC. Enhanced atmospheric resolution reduces convection biases and improves simulation of surface winds over land. Analysis of a subset of models from the Coupled Model Intercomparison Project phase 5 (CMIP5) reveals that very similar mechanisms are at work in driving EEP SST AC biases in these models.
NASA Astrophysics Data System (ADS)
Scaranello, M. A., Sr.; Keller, M. M.; dos-Santos, M. N.; Longo, M.; Pinagé, E. R.; Leitold, V.
2016-12-01
Coarse woody debris is an important but infrequently quantified carbon pool in tropical forests. Based on studies at 12 sites spread across the Brazilian Amazon, we quantified coarse woody debris stocks in intact forests and forests affected by different intensities of degradation by logging and/or fire. Measurements were made in situ and, for the first time, field measurements of coarse woody debris were related to structural metrics derived from airborne lidar. Using the line-intercept method we established 84 transects for sampling fallen coarse woody debris and associated inventory plots for sampling standing dead wood in intact, conventional logging, reduced impact logging, burned and burned-after-logging forests. The overall mean and standard deviation of total coarse woody debris were 50.0 Mg ha-1 and 26.4 Mg ha-1, respectively. Forest degradation increased coarse woody debris stocks compared to intact forests by a factor of 1.7 in reduced impact logging forests and up to 3-fold in burned forests, in a side-by-side comparison of nearby areas. The ratio between coarse woody debris and biomass increased linearly with the number of degradation events (R²: 0.67, p<0.01). Individual lidar-derived structural variables strongly correlated with coarse woody debris in intact and reduced impact logging forests: the 5th percentile of last returns for intact forests (R²: 0.78, p<0.01) and forest gap area, mapped using a lidar-derived canopy height model, for reduced impact logging forests (R²: 0.63, p<0.01). Individual gap area also played a weak but significant role in determining coarse woody debris in burned forests (R²: 0.21, p<0.05), but with a contrasting trend. Both degradation-specific and general multiple models using lidar-derived variables were good predictors of coarse woody debris stocks at different degradation levels in the Brazilian Amazon. The strong relation of coarse woody debris with lidar-derived structural variables suggests an approach for quantifying infrequently measured coarse woody debris over large areas.
Flux front penetration in disordered superconductors.
Zapperi, S; Moreira, A A; Andrade, J S
2001-04-16
We investigate flux front penetration in a disordered type-II superconductor by molecular dynamics simulations of interacting vortices and find scaling laws for the front position and the density profile. The scaling can be understood by performing a coarse graining of the system and writing a disordered nonlinear diffusion equation. Integrating numerically the equation, we observe a crossover from flat to fractal front penetration as the system parameters are varied. The value of the fractal dimension indicates that the invasion process is described by gradient percolation.
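To illustrate the coarse-grained step mentioned above, here is a minimal explicit finite-difference integration of a generic disordered nonlinear diffusion equation of the form ∂ρ/∂t = ∂/∂x[(ρ + η(x)) ∂ρ/∂x]; the specific diffusivity, disorder amplitude, grid, and front criterion are illustrative assumptions rather than the equation used in the paper.

```python
# Hedged sketch: track a coarse-grained flux-front position from a disordered
# nonlinear diffusion equation integrated with an explicit conservative scheme.
import numpy as np

nx, dx, dt, steps = 400, 1.0, 0.05, 20000
rng = np.random.default_rng(4)
eta = 0.2 * rng.random(nx)            # quenched disorder
rho = np.zeros(nx)
rho[0] = 1.0                          # fixed density at the injection boundary

for _ in range(steps):
    # diffusivity at cell faces (arithmetic mean of neighbouring cells)
    d_face = 0.5 * ((rho + eta)[1:] + (rho + eta)[:-1])
    flux = -d_face * np.diff(rho) / dx
    rho[1:-1] -= dt * np.diff(flux) / dx
    rho[0] = 1.0                      # boundary conditions
    rho[-1] = 0.0

front = np.argmax(rho < 1e-3)         # first grid point ahead of the front
print("front position:", front * dx)
```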
NASA Technical Reports Server (NTRS)
Wolfe, Jean; Bauer, Jeff; Bixby, C.J.; Lauderdale, Todd; Shively, Jay; Griner, James; Hayhurst, Kelly
2010-01-01
Topics discussed include: Aeronautics Research Mission Directorate Integrated Systems Research Program (ISRP) and UAS Integration in the NAS Project; UAS Integration into the NAS Project; Separation Assurance and Collision Avoidance; Pilot Aircraft Interface Objectives/Rationale; Communication; Certification; and Integrated Tests and Evaluations.
Relative entropy and optimization-driven coarse-graining methods in VOTCA
Mashayak, S. Y.; Jochum, Mara N.; Koschke, Konstantin; ...
2015-07-20
We discuss recent advances of the VOTCA package for systematic coarse-graining. Two methods have been implemented, namely the downhill simplex optimization and the relative entropy minimization. We illustrate the new methods by coarse-graining SPC/E bulk water and more complex water-methanol mixture systems. The CG potentials obtained from both methods are then evaluated by comparing the pair distributions from the coarse-grained to the reference atomistic simulations. We have also added a parallel analysis framework to improve the computational efficiency of the coarse-graining process.
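To make the simplex-optimization idea concrete, the sketch below fits potential parameters so that a model pair distribution matches a reference one using SciPy's Nelder-Mead downhill simplex. The "model g(r)" here is a purely analytic stand-in; in VOTCA the model distribution would come from an actual CG simulation, and the functional form and parameter values are assumptions.

```python
# Hedged sketch: downhill-simplex matching of a pair distribution function.
import numpy as np
from scipy.optimize import minimize

r = np.linspace(0.25, 1.5, 200)

def toy_rdf(params, r):
    """Stand-in for 'run a CG simulation and measure g(r)': a peak whose
    position and height depend on the potential parameters (sigma, eps)."""
    sigma, eps = params
    return 1.0 + eps * np.exp(-(r - sigma) ** 2 / 0.01) - np.exp(-8.0 * r / sigma)

g_target = toy_rdf([0.32, 0.6], r)          # pretend this is the atomistic g(r)

def cost(params):
    return np.sum((toy_rdf(params, r) - g_target) ** 2)

res = minimize(cost, x0=[0.4, 0.3], method="Nelder-Mead")
print(res.x)        # should recover approximately [0.32, 0.6]
```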
Role of translational entropy in spatially inhomogeneous, coarse-grained models
NASA Astrophysics Data System (ADS)
Langenberg, Marcel; Jackson, Nicholas E.; de Pablo, Juan J.; Müller, Marcus
2018-03-01
Coarse-grained models of polymer and biomolecular systems have enabled the computational study of cooperative phenomena, e.g., self-assembly, by lumping multiple atomistic degrees of freedom along the backbone of a polymer, lipid, or DNA molecule into one effective coarse-grained interaction center. Such a coarse-graining strategy leaves the number of molecules unaltered. In order to treat the surrounding solvent or counterions on the same coarse-grained level of description, one can also stochastically group several of those small molecules into an effective, coarse-grained solvent bead or "fluid element." Such a procedure reduces the number of molecules, and we discuss how to compensate the concomitant loss of translational entropy by density-dependent interactions in spatially inhomogeneous systems.
Coarse graining for synchronization in directed networks
NASA Astrophysics Data System (ADS)
Zeng, An; Lü, Linyuan
2011-05-01
Coarse-graining models are a promising way to analyze and visualize large-scale networks. The coarse-grained networks are required to preserve statistical properties as well as the dynamic behaviors of the initial networks. Some methods have been proposed and found effective in undirected networks, while coarse-graining of directed networks has received little consideration. In this paper we propose a path-based coarse-graining (PCG) method to coarse grain directed networks. Performing the linear stability analysis of synchronization and numerical simulation of the Kuramoto model on four kinds of directed networks, including tree networks and variants of Barabási-Albert networks, Watts-Strogatz networks, and Erdös-Rényi networks, we find our method can effectively preserve the network synchronizability.
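For reference, a minimal simulation of the Kuramoto dynamics used as the test bed above, on a directed network: Euler time stepping of dθ_i/dt = ω_i + K Σ_j A_ij sin(θ_j − θ_i), reporting the usual order parameter. The network, coupling strength, and frequency distribution below are illustrative assumptions, not the networks studied in the paper.

```python
# Hedged sketch: Kuramoto model on a directed network with Euler integration.
import numpy as np

def kuramoto(A, omega, K=1.0, dt=0.01, steps=5000, rng=None):
    """A[i, j] = 1 if node i receives input from node j (directed adjacency)."""
    rng = rng or np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)
    return np.abs(np.mean(np.exp(1j * theta)))    # order parameter r in [0, 1]

n = 50
rng = np.random.default_rng(5)
A = (rng.random((n, n)) < 0.1).astype(float)      # sparse directed network
np.fill_diagonal(A, 0.0)
omega = rng.normal(0.0, 0.1, n)                   # natural frequencies
print("order parameter:", kuramoto(A, omega, K=0.5, rng=rng))
```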
NASA Astrophysics Data System (ADS)
Mamali, Dimitra; Marinou, Eleni; Pikridas, Michael; Kottas, Michael; Binietoglou, Ioannis; Kokkalis, Panagiotis; Tsekeri, Aleksandra; Amiridis, Vasilis; Sciare, Jean; Keleshis, Christos; Engelmann, Ronny; Ansmann, Albert; Russchenberg, Herman W. J.; Biskos, George
2017-04-01
Vertical profiles of the aerosol mass concentration derived from light detection and ranging (lidar) measurements were compared to airborne dried optical particle counter (OPC MetOne, Model 212) measurements during the INUIT-BACCHUS-ACTRIS campaign. The campaign took place in April 2016 and its main focus was the study of aerosol dust particles. During the campaign, the NOA Polly-XT Raman lidar located at Nicosia (35.08° N, 33.22° E) provided round-the-clock vertical profiles of aerosol optical properties. In addition, an unmanned aerial vehicle (UAV) carrying an OPC flew on 7 days during the early morning hours. The flights were performed at Orounda (35.1018° N, 33.0944° E), reaching altitudes of 2.5 km a.s.l., which allows comparison with a good fraction of the recorded lidar data. The polarization lidar photometer networking method (POLIPHON) was used for the estimation of the fine-mode (non-dust) and coarse-mode (dust) aerosol mass concentration profiles. This method uses as input the particle backscatter coefficient and particle depolarization profiles of the lidar at 532 nm wavelength and derives the aerosol mass concentration. The first step in this approach makes use of the lidar observations to separate the backscatter and extinction contributions of the weakly depolarizing non-dust aerosol components from the contributions of the strongly depolarizing dust particles, under the assumption of an externally mixed two-component aerosol. In the second step, sun photometer retrievals of the fine- and coarse-mode aerosol optical thickness (AOT) and volume concentration are used to calculate the associated concentrations from the extinction coefficients retrieved from the lidar. The estimated aerosol volume concentrations were converted into mass concentrations with an assumption for the bulk aerosol density, and compared with the OPC measurements. The first results show agreement within the experimental uncertainty. This project received funding from the European Union's Seventh Framework Programme (FP7) project BACCHUS under grant agreement no. 603445, and the European Union's Horizon 2020 research and innovation programme ACTRIS-2 under grant agreement no. 654109.
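A hedged sketch of the two-step retrieval described above: the dust/non-dust backscatter separation follows the widely used depolarization-based formula (after Tesche et al.), and the mass conversion multiplies extinction by a column volume-to-AOT ratio and an assumed particle density. Every numerical value below (depolarization ratios, lidar ratio, conversion factor, density, toy profiles) is an illustrative assumption, not a campaign result.

```python
# Hedged sketch of a POLIPHON-type dust / non-dust separation and mass conversion.
import numpy as np

def separate_dust(beta_p, delta_p, delta_dust=0.31, delta_nondust=0.05):
    """Return (beta_dust, beta_nondust) from the particle backscatter beta_p and
    particle depolarization ratio delta_p at 532 nm."""
    f = ((delta_p - delta_nondust) * (1 + delta_dust)
         / ((delta_dust - delta_nondust) * (1 + delta_p)))
    beta_dust = np.clip(f, 0.0, 1.0) * beta_p
    return beta_dust, beta_p - beta_dust

def mass_concentration(beta, lidar_ratio, v_to_tau, density):
    """extinction = beta * lidar_ratio; mass = density * (v/tau) * extinction."""
    return density * v_to_tau * beta * lidar_ratio

# toy profiles
z = np.linspace(0.2, 2.5, 50)                       # km a.s.l.
beta_p = 2e-6 * np.exp(-z / 1.5)                    # m^-1 sr^-1
delta_p = 0.05 + 0.2 * np.exp(-((z - 1.5) / 0.3) ** 2)
bd, bnd = separate_dust(beta_p, delta_p)
m_dust = mass_concentration(bd, lidar_ratio=45.0, v_to_tau=0.6e-6, density=2.6e3)
print(m_dust.max())                                  # kg m^-3, order of magnitude only
```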
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 34 (Education), § 425.1: What is the Demonstration Projects for the Integration of Vocational and Academic Learning Program? (Education; Demonstration Projects for the Integration of Vocational and Academic Learning Program; General.)
Toward an integrated quasi-operational air quality analysis and prediction system for South America
NASA Astrophysics Data System (ADS)
Hoshyaripour, Gholam Ali; Brasseur, Guy; Petersen, Katinka; Bouarar, Idiir; Andrade, Maria de Fatima
2015-04-01
Recent industrialization and urbanization in South America (SA) have notably exacerbated air pollution, with adverse impacts on human health and socio-economic systems. Consequently, there is a strong demand for developing ever-better assessment mechanisms to monitor air quality at different temporal and spatial scales and minimize its damage. Building on previous achievements (e.g., the MACC project in Europe and the PANDA project in East Asia), we aim to design and implement an integrated system to monitor, analyze and forecast air quality in SA along with its impacts upon public health and agriculture. An initiative will be established to combine observations (both satellite and in situ) with advanced numerical models in order to provide a robust scientific basis for short- and long-term decision-making concerning air quality issues in SA countries. The main objectives of the project are defined as 3E: Enhancement of the air quality monitoring system through coupling models and observations, Elaboration of comprehensive indicators and assessment tools to support policy-making, and Establishment of efficient information-exchange platforms to facilitate communication among scientists, authorities, stakeholders and the public. Here we present the results of the initial stage, where a coarse-resolution (50×50 km) setup of the Weather Research and Forecasting model with Chemistry (WRF-Chem) is used to simulate air quality in SA considering anthropogenic, biomass-burning (based on the MACCity and FINN inventories, respectively) and biogenic emissions (using the MEGAN model). According to the availability of observation data for the Metropolitan Area of São Paulo, August 2012 was selected as the simulation period. Nested domains with higher resolution (15×15 km) are also embedded within the parent domain over the megacities (São Paulo and Rio de Janeiro in Brazil and Buenos Aires in Argentina), which account for the major anthropogenic emission sources located along coastal regions of the continent. Fire and biogenic emissions, on the other hand, mainly take place within the inner parts of the continent, e.g., in the Amazon basin and the sugarcane areas of São Paulo State. Contributions of these emission sources to reactive gases (e.g., CO, O3, NOx) and particulate matter concentrations are quantified. The next step is to examine different emission inventories and observation data to find an optimal description of the atmospheric composition in SA.
An AFM-SIMS Nano Tomography Acquisition System
NASA Astrophysics Data System (ADS)
Swinford, Richard William
An instrument adding the capability to measure 3D volumetric chemical composition has been constructed by me as a member of the Sanchez Nano Laboratory. The laboratory's in situ atomic force microscope (AFM) and secondary ion mass spectrometry (SIMS) systems are functional and integrated as one instrument. The SIMS utilizes a Ga focused ion beam (FIB) combined with a quadrupole mass analyzer. The AFM comprises a 6-axis stage: three coarse axes and three fine. The coarse stage is used for placing the AFM tip anywhere inside a 13 × 13 × 5 mm³ (xyz) volume, so the tip can be moved in and out of the FIB processing region with ease. The planned range for the Z-axis piezo was 60 μm, but it was reduced after the piezo was damaged by arc events. The repaired Z-axis piezo is now operated at a smaller nominal range of 18 μm (16.7 μm after pre-loading), still quite respectable for an AFM. The noise floor of the AFM is approximately 0.4 nm Rq. The voxel size for the combined instrument is targeted at 50 nm or larger, so 0.4 nm of xyz uncertainty is acceptable. The instrument has been used for analyzing samples using FIB beam currents of 250 pA and 5.75 nA. Coarse tip approaches can take a long time, so an abbreviated technique is employed. Because of the relatively long throw of the Z piezo, the tip can be disengaged by deactivating the servo PID. Once disengaged, it can be moved laterally out of the way of the FIB-SIMS using the coarse stage. This instrument has been used to acquire volumetric data on AlTiC using AFM tip diameters of 18.9 nm and 30.6 nm. Acquisition times are very long, requiring multiple days to acquire a 50-image stack. New features to be added include auto-stigmation, auto beam shift, and more software automation. Longer-term upgrades include a new lower-voltage Z-piezo with strain-gauge feedback and a new design to extend the life of the coarse XY nano-positioners. This AFM-SIMS instrument, as constructed, has proven to be a great proof-of-concept vehicle. In the future it will be used to analyze microfossils, and it will also be used as part of an intensive teaching curriculum.
Zhang, Yuexia; Yang, Zhenhua; Feng, Yan; Li, Ruijin; Zhang, Quanxi; Geng, Hong; Dong, Chuan
2015-08-01
The main aim of the present study was to examine in vitro responses of rat alveolar macrophages (AMs) exposed to coarse chalk dust particles (particulate matter in the size range 2.5-10 μm, PM(coarse)) in terms of respiratory burst and oxidative stress. Chalk PM(coarse)-induced respiratory burst in AMs was measured using a luminol-dependent chemiluminescence (CL) method. In addition, the cell viability; lactate dehydrogenase (LDH) release; levels of cellular superoxide dismutase (SOD), catalase (CAT), glutathione (GSH), malondialdehyde (MDA), and acid phosphatase (ACP); plasma membrane ATPase; and extracellular nitric oxide (NO) level were determined 4 h after treatment with different dosages of chalk PM(coarse). The results showed that chalk PM(coarse) initiated the respiratory burst of AMs, as indicated by strong CL, which was inhibited by diphenyleneiodonium chloride and L-N-nitro-L-arginine methyl ester hydrochloride. This suggested that chalk PM(coarse) induced the production of reactive oxygen species (ROS) and reactive nitrogen species (RNS) in AMs. This hypothesis was confirmed by the fact that chalk PM(coarse) resulted in a significant decrease of intracellular SOD, GSH, ACP, and ATPase levels and a notable increase of intracellular CAT and MDA content and extracellular NO level, consequently leading to a decrease in cell viability and an increase in LDH release. It was concluded that AMs exposed to chalk PM(coarse) can suffer cytotoxicity, which may be mediated by the generation of excessive ROS/RNS. Graphical Abstract: The possible mechanism of coarse chalk particle-induced adverse effects in AMs.
NASA Astrophysics Data System (ADS)
Li, Zhen-Lu
2018-03-01
The N-terminal amphiphilic helices of the proteins Epsin, Sar1p, and Arf1 play a critical role in initiating membrane deformation. The interactions of these amphiphilic helices with lipid membranes are investigated in this study by combining all-atom and coarse-grained simulations. In the all-atom simulations, the amphiphilic helices of Epsin and Sar1p are found to have a shallower insertion depth into the membrane than the amphiphilic helix of Arf1, but remarkably, the amphiphilic helices of Epsin and Sar1p induce higher asymmetry in the lipid packing between the two monolayers of the membrane. The insertion depth of an amphiphilic helix into the membrane is determined not only by the overall hydrophobicity but also by the specific distribution of polar and non-polar residues along the helix. To directly compare their ability to deform the membrane, coarse-grained simulations are performed to investigate the membrane deformation under the insertion of multiple helices. Project supported by the National Natural Science Foundation of China (Grant Nos. 91427302 and 11474155).
Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method
Kalchev, Delyan Z.; Lee, C. S.; Villa, U.; ...
2016-09-22
Here, we propose two multilevel spectral techniques for constructing coarse discretization spaces for saddle-point problems corresponding to PDEs involving a divergence constraint, with a focus on mixed finite element discretizations of scalar self-adjoint second order elliptic equations on general unstructured grids. We use element agglomeration algebraic multigrid (AMGe), which employs coarse elements that can have nonstandard shape since they are agglomerates of fine-grid elements. The coarse basis associated with each agglomerated coarse element is constructed by solving local eigenvalue problems and local mixed finite element problems. This construction leads to stable upscaled coarse spaces and guarantees the inf-sup compatibility of the upscaled discretization. Also, the approximation properties of these upscaled spaces improve by adding more local eigenfunctions to the coarse spaces. The higher accuracy comes at the cost of additional computational effort, as the sparsity of the resulting upscaled coarse discretization (referred to as operator complexity) deteriorates when we introduce additional functions in the coarse space. We also provide an efficient solver for the coarse (upscaled) saddle-point system by employing hybridization, which leads to a symmetric positive definite (s.p.d.) reduced system for the Lagrange multipliers, and to solve the latter s.p.d. system, we use our previously developed spectral AMGe solver. Numerical experiments, in both two and three dimensions, are provided to illustrate the efficiency of the proposed upscaling technique.
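The sketch below illustrates only the spectral ingredient of the construction: on a single agglomerated coarse element, a small local generalized eigenproblem A_loc v = λ M_loc v is solved and the lowest few eigenvectors are kept as coarse basis functions. The toy local matrices and the number of retained modes are assumptions; the full AMGe construction for mixed (saddle-point) discretizations is considerably more involved.

```python
# Hedged sketch: local spectral coarse basis on one agglomerate.
import numpy as np
from scipy.linalg import eigh

def local_coarse_basis(A_loc, M_loc, n_modes=3):
    """A_loc, M_loc : small symmetric local stiffness and mass matrices of an
    agglomerate. Returns the n_modes lowest generalized eigenvectors as columns."""
    w, v = eigh(A_loc, M_loc)           # ascending generalized eigenvalues
    return v[:, :n_modes]

# toy agglomerate: 1-D Laplacian stiffness and lumped mass on 12 local dofs
n = 12
A_loc = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M_loc = np.eye(n)
P_loc = local_coarse_basis(A_loc, M_loc, n_modes=3)
print(P_loc.shape)                      # (12, 3) local prolongation block
```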
Quantifying the coarse-root biomass of intensively managed loblolly pine plantations
Ashley T. Miller; H. Lee Allen; Chris A. Maier
2006-01-01
Most of the carbon accumulation during a forest rotation is in plant biomass and the forest floor. Most of the belowground biomass in older loblolly pine (Pinus taeda L.) forests is in coarse roots, and coarse roots persist longer after harvest than aboveground biomass and fine roots. The main objective was to assess the carbon accumulation in coarse...
Coarse-graining and self-dissimilarity of complex networks
NASA Astrophysics Data System (ADS)
Itzkovitz, Shalev; Levitt, Reuven; Kashtan, Nadav; Milo, Ron; Itzkovitz, Michael; Alon, Uri
2005-01-01
Can complex engineered and biological networks be coarse-grained into smaller and more understandable versions in which each node represents an entire pattern in the original network? To address this, we define coarse-graining units as connectivity patterns which can serve as the nodes of a coarse-grained network and present algorithms to detect them. We use this approach to systematically reverse-engineer electronic circuits, forming understandable high-level maps from incomprehensible transistor wiring: first, a coarse-grained version in which each node is a gate made of several transistors is established. Then the coarse-grained network is itself coarse-grained, resulting in a high-level blueprint in which each node is a circuit module made of many gates. We apply our approach also to a mammalian protein signal-transduction network, to find a simplified coarse-grained network with three main signaling channels that resemble multi-layered perceptrons made of cross-interacting MAP-kinase cascades. We find that both biological and electronic networks are “self-dissimilar,” with different network motifs at each level. The present approach may be used to simplify a variety of directed and nondirected, natural and designed networks.
Coarse-graining errors and numerical optimization using a relative entropy framework.
Chaimovich, Aviel; Shell, M Scott
2011-03-07
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
STOCK: Structure mapper and online coarse-graining kit for molecular simulations
Bevc, Staš; Junghans, Christoph; Praprotnik, Matej
2015-03-15
We present a web toolkit STructure mapper and Online Coarse-graining Kit for setting up coarse-grained molecular simulations. The kit consists of two tools: structure mapping and Boltzmann inversion tools. The aim of the first tool is to define a molecular mapping from high, e.g. all-atom, to low, i.e. coarse-grained, resolution. Using a graphical user interface it generates input files, which are compatible with standard coarse-graining packages, e.g. VOTCA and DL_CGMAP. Our second tool generates effective potentials for coarse-grained simulations preserving the structural properties, e.g. radial distribution functions, of the underlying higher resolution model. The required distribution functions can be provided by any simulation package. Simulations are performed on a local machine and only the distributions are uploaded to the server. The applicability of the toolkit is validated by mapping atomistic pentane and polyalanine molecules to a coarse-grained representation. Effective potentials are derived for systems of TIP3P (transferable intermolecular potential 3 point) water molecules and salt solution. The presented coarse-graining web toolkit is available at http://stock.cmm.ki.si.
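The Boltzmann-inversion step that such toolkits automate takes a target radial distribution function g(r) and turns it into an initial effective pair potential U(r) = -kT ln g(r), usually followed by iterative refinement. A minimal sketch, with an assumed temperature and a purely synthetic g(r):

```python
# Hedged sketch of direct Boltzmann inversion of a radial distribution function.
import numpy as np

def boltzmann_inversion(r, g_r, kT=2.494):     # kT in kJ/mol at ~300 K (assumed)
    """Return the potential of mean force -kT ln g(r), masking empty bins."""
    u = np.full_like(g_r, np.nan)
    nonzero = g_r > 0
    u[nonzero] = -kT * np.log(g_r[nonzero])
    return u

# toy g(r) with a single coordination peak
r = np.linspace(0.24, 1.2, 97)
g = 1.0 + 1.8 * np.exp(-((r - 0.28) / 0.03) ** 2) - np.exp(-30 * (r - 0.24))
u = boltzmann_inversion(r, np.clip(g, 0.0, None))
```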
Regulation of multispanning membrane protein topology via post-translational annealing.
Van Lehn, Reid C; Zhang, Bin; Miller, Thomas F
2015-09-26
The canonical mechanism for multispanning membrane protein topogenesis suggests that protein topology is established during cotranslational membrane integration. However, this mechanism is inconsistent with the behavior of EmrE, a dual-topology protein for which the mutation of positively charged loop residues, even close to the C-terminus, leads to dramatic shifts in its topology. We use coarse-grained simulations to investigate the Sec-facilitated membrane integration of EmrE and its mutants on realistic biological timescales. This work reveals a mechanism for regulating membrane-protein topogenesis, in which initially misintegrated configurations of the proteins undergo post-translational annealing to reach fully integrated multispanning topologies. The energetic barriers associated with this post-translational annealing process enforce kinetic pathways that dictate the topology of the fully integrated proteins. The proposed mechanism agrees well with the experimentally observed features of EmrE topogenesis and provides a range of experimentally testable predictions regarding the effect of translocon mutations on membrane protein topogenesis.
Predicting Flory-Huggins χ from Simulations
NASA Astrophysics Data System (ADS)
Zhang, Wenlin; Gomez, Enrique D.; Milner, Scott T.
2017-07-01
We introduce a method, based on a novel thermodynamic integration scheme, to extract the Flory-Huggins χ parameter as small as 10⁻³ kBT for polymer blends from molecular dynamics (MD) simulations. We obtain χ for the archetypical coarse-grained model of nonpolar polymer blends: flexible bead-spring chains with different Lennard-Jones interactions between A and B monomers. Using these χ values and a lattice version of self-consistent field theory (SCFT), we predict the shape of planar interfaces for phase-separated binary blends. Our SCFT results agree with MD simulations, validating both the predicted χ values and our thermodynamic integration method. Combined with atomistic simulations, our method can be applied to predict χ for new polymers from their chemical structures.
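For orientation only, here is a generic thermodynamic-integration skeleton (not the specific novel scheme of the abstract): a free-energy difference is estimated by integrating the ensemble average ⟨dU/dλ⟩ over a coupling parameter λ, and χ follows after dividing by kT and the relevant per-monomer normalization. The lambda grid, the fake per-lambda averages, and the temperature are placeholder assumptions.

```python
# Hedged sketch of a generic thermodynamic integration over a coupling parameter.
import numpy as np

def thermodynamic_integration(lambdas, dU_dlambda_means):
    """Trapezoidal integration of <dU/dlambda> sampled at a set of lambda values;
    each entry of dU_dlambda_means is the average from one MD run."""
    return np.trapz(dU_dlambda_means, lambdas)

# toy numbers standing in for per-lambda MD averages (kJ/mol)
lams = np.linspace(0.0, 1.0, 11)
means = 0.8 - 0.5 * lams            # pretend these came from 11 simulations
dF = thermodynamic_integration(lams, means)
kT = 2.494                           # kJ/mol at ~300 K (assumed)
print("Delta F =", dF, "kJ/mol ->", dF / kT, "kT per monomer (toy)")
```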
Source identification of coarse particles in the Desert ...
The Desert Southwest Coarse Particulate Matter Study was undertaken to further our understanding of the spatial and temporal variability and sources of fine and coarse particulate matter (PM) in rural, arid, desert environments. Sampling was conducted between February 2009 and February 2010 in Pinal County, AZ near the town of Casa Grande where PM concentrations routinely exceed the U.S. National Ambient Air Quality Standards (NAAQS) for both PM10 and PM2.5. In this desert region, exceedances of the PM10 NAAQS are dominated by high coarse particle concentrations, a common occurrence in this region of the United States. This work expands on previously published measurements of PM mass and chemistry by examining the sources of fine and coarse particles and the relative contribution of each to ambient PM mass concentrations using the Positive Matrix Factorization receptor model (Clements et al., 2014). Highlights • Isolation of coarse particles from fine particle sources. • Unique chemical composition of coarse particles. • Role of primary biological particles on aerosol loadings.
NASA Astrophysics Data System (ADS)
Arifi, Eva; Cahya, Evi Nur; Christin Remayanti, N.
2017-09-01
The performance of porous concrete made of recycled coarse aggregate was investigated. Fly ash was used as a partial cement replacement. In this study, the strength of recycled aggregate was compared to that of low-quality natural coarse aggregate with high water absorption. Compressive strength and splitting tensile strength tests were conducted to evaluate the performance of porous concrete using fly ash as cement replacement. Results show that using recycled coarse aggregate at up to 75% to replace low-quality natural coarse aggregate with high water absorption increases the compressive strength and splitting tensile strength of porous concrete, and using fly ash at up to 25% as cement replacement further improves both.
Integrated, multi-scale, spatial-temporal cell biology--A next step in the post genomic era.
Horwitz, Rick
2016-03-01
New microscopic approaches, high-throughput imaging, and gene editing promise major new insights into cellular behaviors. When coupled with genomic and other 'omic information and "mined" for correlations and associations, a new breed of powerful and useful cellular models should emerge. These top-down, coarse-grained, statistical models, in turn, can be used to form hypotheses that merge with the fine-grained, bottom-up mechanistic studies and models that are the backbone of cell biology. The goal of the Allen Institute for Cell Science is to advance the top-down approach by developing a high-throughput microscopy pipeline, integrated with modeling, that uses gene-edited hiPS cell lines in various physiological and pathological contexts. The output of these experiments and models will be an "animated" cell capable of integrating and analyzing image data generated from experiments and models.
NASA Astrophysics Data System (ADS)
Gulis, V.; Ferreira, V. J.; Graca, M. A.
2005-05-01
Traditional approaches to assessing stream ecosystem health rely on structural parameters, e.g. a variety of biotic indices. The goal of the Europe-wide RivFunction project is to develop methodology that uses functional parameters (e.g. plant litter decomposition) to this end. Here we report on decomposition experiments carried out in Portugal in five pairs of streams that differed in dissolved inorganic nutrients. On average, decomposition rates of alder and oak leaves were 2.8 and 1.4 times higher in high-nutrient streams in coarse and fine mesh bags, respectively, than in the corresponding reference streams. Breakdown rate correlated better with stream water SRP (soluble reactive phosphorus) concentration than with TIN (total inorganic nitrogen). Fungal biomass and sporulation rates of aquatic hyphomycetes associated with decomposing leaves were stimulated by higher nutrient levels. Both fungal parameters measured at very early stages of decomposition (e.g. days 7-13) correlated well with overall decomposition rates. Eutrophication had no significant effect on shredder abundances in leaf bags, but species richness was higher in disturbed streams. Decomposition is a key functional parameter in streams that integrates many other variables and can be useful in assessing stream ecosystem health. We also argue that because decomposition is often controlled by fungal activity, microbial parameters can also be useful in bioassessment.
Simulation study of entropy production in the one-dimensional Vlasov system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie
2016-07-15
The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy production in the cases of a random field, linear Landau damping, and the bump-on-tail instability is computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing coarse-graining length. When the distribution function differs only slightly from a Maxwellian, the converged value agrees with the result computed from the definition of thermodynamic entropy. The choice of coarse-graining length used to compute the coarse-grain averaged distribution function is discussed.
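A minimal sketch of how a coarse-grain averaged distribution function and the corresponding entropy might be computed from particle data is shown below. It assumes a simple histogram average over uniform phase-space cells and the form S = -∫ f ln f dx dv; it is not the paper's specific numerical scheme.

```python
import numpy as np


def coarse_grain_distribution(x, v, x_edges, v_edges):
    """Histogram particles into uniform (x, v) cells and normalize to a density."""
    H, _, _ = np.histogram2d(x, v, bins=[x_edges, v_edges])
    dx = np.diff(x_edges)[0]          # assumes uniform bin widths
    dv = np.diff(v_edges)[0]
    f = H / (H.sum() * dx * dv)       # coarse-grain averaged distribution
    return f, dx, dv


def coarse_grained_entropy(f, dx, dv):
    """S = -sum f ln f dx dv over occupied cells."""
    mask = f > 0
    return -np.sum(f[mask] * np.log(f[mask])) * dx * dv

# Usage sketch: the entropy difference between two snapshots, for a fixed
# coarse-graining length dx*dv, gives the coarse-grained entropy production.
```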
Joint-operation in water resources project in Indonesia: Integrated or non-integrated
NASA Astrophysics Data System (ADS)
Ophiyandri, Taufika; Istijono, Bambang; Hidayat, Benny
2017-11-01
The construction of large water resources infrastructure projects often involves a joint-operation (JO) arrangement between two or more construction companies. JO forms can be grouped into two categories: an integrated type and a non-integrated type. This paper investigates why companies form JO projects; the specific advantages and problems of JO projects are also analysed. To achieve these objectives, three water resources infrastructure projects were selected as case studies. Data were gathered through 11 semi-structured interviews with project owners, contractor managers, and project staff, and were analysed by means of content analysis. It was found that the most fundamental reason to form a JO is to win a competition or tender. An integrated model is favoured because it can reduce overhead costs and has a simple management system, while a non-integrated model is selected because it avoids a sleeping partner and makes each contractor more responsible for its own job.
Molecular Simulation Studies of Covalently and Ionically Grafted Nanoparticles
NASA Astrophysics Data System (ADS)
Hong, Bingbing
Solvent-free covalently or ionically grafted nanoparticles (CGNs and IGNs) are a new class of organic-inorganic hybrid composite materials exhibiting fluid-like behavior around room temperature. With structures similar to prior systems, e.g. nanocomposites, neutral or charged colloids, and ionic liquids, CGNs and IGNs inherit the functionality of inorganic nanoparticles and the facile processibility of polymers, as well as conductivity and nonvolatility from their constituent materials. In spite of extensive prior experimental research covering synthesis and measurements of thermal and dynamic properties, little progress has been made in understanding these new materials at the molecular level, because of the lack of simulation work in this area. Atomistic and coarse-grained molecular dynamics simulations are performed in this thesis to investigate the thermodynamics, structure, and dynamics of these systems and to seek methods predictive of their properties. Starting from poly(ethylene oxide) (PEO) oligomer melts, we established atomistic models based on united-atom representations of methylene. The Green-Kubo and Einstein-Helfand formulas were used to calculate the transport properties. The simulations yield densities, viscosities, and diffusivities in good agreement with experimental data. The chain-length dependence of the transport properties suggests that neither the Rouse nor the reptation model is applicable in the short-chain regime investigated. Coupled with thermodynamic integration methods, the models give good predictions of pressure-composition-density relations for CO2 + PEO oligomers. Water effects on the Henry's constant of CO2 in PEO have also been investigated. The dependence of the calculated Henry's constants on the weight percentage of water falls on a temperature-dependent master curve, irrespective of PEO chain length. CGNs are modeled by the inclusion of solid-sphere nanoparticles into the atomistic oligomers. The viscosities calculated from Green-Kubo relationships and temperature extrapolation are of the same order of magnitude as experimental values, but show a smaller activation energy relative to real CGN systems. Grafted systems have higher viscosities, smaller diffusion coefficients, and slower chain dynamics than their ungrafted counterparts (nanocomposites) at high temperatures. At lower temperatures, grafted systems exhibit faster dynamics for both nanoparticles and chains relative to ungrafted systems, because of lower aggregation of nanoparticles and enhanced correlations between nanoparticles and chains. This agrees with the experimental observation that the new materials have liquid-like behavior in the absence of a solvent. To lower the simulated temperatures into the experimental range, we established a coarse-grained CGN model by matching structural distribution functions to atomistic simulation data. In contrast with linear polymer systems, for which coarse-graining always accelerates dynamics, coarse-graining of grafted nanoparticles can either accelerate or slow down the core motions, depending on the length of the grafted chains. This can be qualitatively predicted by a simple transition-state theory. Atomistic models similar to those for CGNs were developed for IGNs, with ammonium counterions described with explicit hydrogens; these were in turn compared with "generic" coarse-grained IGNs.
The elimination of chemical details in the coarse-grained models does not qualitatively change the radial distribution functions and diffusion of atomistic IGNs, but saves considerable simulation resources and makes simulations near room temperature affordable. The chain counterions in both atomistic and coarse-grained models are mobile, moving from site to site and from nanoparticle to nanoparticle. At the same temperature and core volume fraction, the nanoparticle diffusivities in coarse-grained IGNs are slower by a factor of ten than the cores of CGNs. The coarse-grained IGN models are then used to investigate the system dynamics through analysis of the temperature and structural-parameter dependence of the transport properties (self-diffusion coefficients, viscosities, and conductivities). Further, the migration kinetics of oligomeric counterions is analyzed in a manner analogous to unimer exchange between micellar aggregates. The counterion migrations follow a "double-core" mechanism and are kinetically controlled by neighboring-core collisions. (Abstract shortened by UMI.)
Lunar-Ultraviolet Telescope Experiment (LUTE) integrated program plan
NASA Technical Reports Server (NTRS)
Smith, Janice F. (Compiler); Forrest, Larry
1993-01-01
A detailed Lunar Ultraviolet Telescope Experiment (LUTE) program plan, representing the major decisions for program execution and the tasks leading to those decisions, is presented. The purpose of this task was to develop an integrated plan of project activities for the LUTE project and to display the plan as an integrated network showing the project activities, all critical interfaces, and schedules. The integrated network will provide the project manager with a framework for strategic planning and risk management throughout the life of the project.
1985-07-01
soil salinity was high and acidity was low. Grasses established themselves and grew better than legumes. Wildlife response to vegetation establishment...Apalachicola Bay marsh project was developed on hydraulically pumped fine- and coarse-grained material placed in a saline intertidal environment in early 1976...is 10.5°C, the highest in Connecticut, which is due to a moderating influence by the Sound. The average tide is 0.75 m and the salinity ranges from
Future Climate Change Impact Assessment of River Flows at Two Watersheds of Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Ercan, A.; Ishida, K.; Kavvas, M. L.; Chen, Z. R.; Jang, S.; Amin, M. Z. M.; Shaaban, A. J.
2016-12-01
Impacts of climate change on the river flows under future climate change conditions were assessed over Muda and Dungun watersheds of Peninsular Malaysia by means of a coupled regional climate model and a physically-based hydrology model utilizing an ensemble of 15 different future climate realizations. Coarse resolution GCMs' future projections covering a wide range of emission scenarios were dynamically downscaled to 6 km resolution over the study area. Hydrologic simulations of the two selected watersheds were carried out at hillslope-scale and at hourly increments.
Archie's Saturation Exponent for Natural Gas Hydrate in Coarse-Grained Reservoirs
NASA Astrophysics Data System (ADS)
Cook, Ann E.; Waite, William F.
2018-03-01
Accurately quantifying the amount of naturally occurring gas hydrate in marine and permafrost environments is important for assessing its resource potential and understanding the role of gas hydrate in the global carbon cycle. Electrical resistivity well logs are often used to calculate gas hydrate saturations, Sh, using Archie's equation. Archie's equation, in turn, relies on an empirical saturation parameter, n. Though n = 1.9 has been measured for ice-bearing sands and is widely used within the hydrate community, it is highly questionable whether this n value is appropriate for hydrate-bearing sands. In this work, we calibrate n for hydrate-bearing sands from the Canadian permafrost gas hydrate research well, Mallik 5L-38, by establishing an independent downhole Sh profile based on compressional-wave velocity log data. Using the independently determined Sh profile and colocated electrical resistivity and bulk density logs, Archie's saturation equation is solved for n, and uncertainty is tracked throughout the iterative process. In addition to the Mallik 5L-38 well, we also apply this method to two marine, coarse-grained reservoirs from the northern Gulf of Mexico Gas Hydrate Joint Industry Project: Walker Ridge 313-H and Green Canyon 955-H. All locations yield similar results, each suggesting n ≈ 2.5 ± 0.5. Thus, for the coarse-grained, highly hydrate-bearing sands (Sh > 0.4) of greatest interest as potential energy resources, we suggest that n = 2.5 ± 0.5 should be applied in Archie's equation for either marine or permafrost gas hydrate settings if independent estimates of n are not available.
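For reference, hydrate saturation follows from Archie's relation S_w^n = a R_w / (φ^m R_t) together with S_h = 1 - S_w. The snippet below is a generic sketch; the log values and the parameters a and m are illustrative assumptions rather than the calibrated Mallik 5L-38 values.

```python
import numpy as np


def archie_hydrate_saturation(Rt, Rw, phi, a=1.0, m=2.0, n=2.5):
    """Gas hydrate saturation from Archie's equation.

    Rt  : measured formation resistivity (ohm m)
    Rw  : pore-water resistivity (ohm m)
    phi : porosity (fraction)
    a, m, n : Archie parameters; n = 2.5 +/- 0.5 is the value suggested
              above for coarse-grained hydrate-bearing sands.
    """
    Sw = (a * Rw / (phi ** m * Rt)) ** (1.0 / n)   # water saturation
    return 1.0 - np.clip(Sw, 0.0, 1.0)             # hydrate saturation Sh


# Illustrative use with made-up log values:
print(archie_hydrate_saturation(Rt=20.0, Rw=0.25, phi=0.35))   # ~0.6
```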
NASA Astrophysics Data System (ADS)
Patricola, C. M.; Cook, K. H.
2008-12-01
As greenhouse warming continues, there is growing concern about the future climate of both Africa, which is highlighted by the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4) as exceptionally vulnerable to climate change, and India. Precipitation projections from the AOGCMs of the IPCC AR4 are relatively consistent over India, but not over northern Africa. Inconsistencies can be related to a model's inability to capture climate processes correctly, deficiencies in physical parameterizations, different SST projections, or horizontal atmospheric resolution that is too coarse to realistically represent the tight gradients over West Africa and the complex topography of East Africa and India. Treatment of the land surface in a model may also be an issue over West Africa and India, where land-surface/atmosphere interactions are very important. Here a method for simulating future climate is developed and applied using a high-resolution regional model in conjunction with output from a suite of AOGCMs, drawing on the advantages of both the regional and global modeling approaches. Integration by the regional model allows for finer horizontal resolution and regionally appropriate selection of parameterizations and land-surface model. AOGCM output is used to provide SST projections and lateral boundary conditions to constrain the regional model. The control simulation corresponds to 1981-2000, and eight future simulations representing 2081-2100 are conducted, each constrained by a different AOGCM and forced by CO2 concentrations from the SRES A2 emissions scenario. After model spin-up, May through October remain for investigation. Analysis is focused on climate change parameters important for impacts on agriculture and water resource management, and is presented in a format compatible with the IPCC reports. Precipitation projections simulated by the regional model are quite consistent, with 75% or more ensemble members agreeing on the sign of the anomaly over vast regions of Africa and India. Over West Africa, where the regional model provides the greatest improvement over the AOGCMs in consistency of ensemble members, precipitation at the end of the century is generally projected to increase during May and decrease in June and July. Wetter conditions are simulated during August through October, with the exception of drying close to the Guinean Coast in August. In late summer, high rainfall rates are simulated more frequently in the future, indicating the possibility of increases in flooding events. The regional model's projections over India are in stark contrast to the AOGCMs', producing intense and generally widespread drying in August and September. The promising method developed here is young, and further developments are envisioned, including the addition of ocean, vegetation, and dust models. Ensembles that employ other regional models, sets of parameterizations, and emissions scenarios should also be explored.
Illusory conjunctions in simultanagnosia: coarse coding of visual feature location?
McCrea, Simon M; Buxbaum, Laurel J; Coslett, H Branch
2006-01-01
Simultanagnosia is a disorder characterized by an inability to see more than one object at a time. We report a simultanagnosic patient (ED) with bilateral posterior infarctions who produced frequent illusory conjunctions on tasks involving form and surface features (e.g., a red T) and form alone. ED also produced "blend" errors in which features of one familiar perceptual unit appeared to migrate to another familiar perceptual unit (e.g., "RO" read as "PQ"). ED often misread scrambled letter strings as a familiar word (e.g., "hmoe" read as "home"). Finally, ED's success in reporting two letters in an array was inversely related to the distance between the letters. These findings are consistent with the hypothesis that ED's illusory conjunctions reflect coarse coding of visual feature location that is ameliorated in part by top-down information from object and word recognition systems; the findings are also consistent, however, with Treisman's Feature Integration Theory. Finally, the data provide additional support for the claim that the dorsal parieto-occipital cortex is implicated in the binding of visual feature information.
Application-specific coarse-grained reconfigurable array: architecture and design methodology
NASA Astrophysics Data System (ADS)
Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu
2015-06-01
Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. The numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploiting different levels of parallelism. However, a gap remains between CGRAs and application-specific integrated circuits (ASICs). Some application domains, such as software-defined radios (SDRs), require flexibility along with increasing performance demands, so more effective CGRA architectures need to be developed. Customising a CGRA to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented, along with a mapping algorithm based on ant colony optimisation. Experimental results on the SDR target domain show that, compared with other general-purpose and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for the given applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wehner, Michael; Prabhat; Reed, Kevin A.
The four idealized configurations of the U.S. CLIVAR Hurricane Working Group are integrated using the global Community Atmospheric Model version 5.1 at two different horizontal resolutions, approximately 100 and 25 km. The publicly released 0.9° × 1.3° configuration is a poor predictor of the sign of the 0.23° × 0.31° model configuration’s change in the total number of tropical storms in a warmer climate. However, it does predict the sign of the higher-resolution configuration’s change in the number of intense tropical cyclones in a warmer climate. In the 0.23° × 0.31° model configuration, both increased CO2 concentrations and elevated sea surface temperature (SST) independently lower the number of weak tropical storms and shorten their average duration. Conversely, increased SST causes more intense tropical cyclones and lengthens their average duration, resulting in a greater number of intense tropical cyclone days globally. Increased SST also increased maximum tropical storm instantaneous precipitation rates across all storm intensities. It was found that while a measure of maximum potential intensity based on climatological mean quantities adequately predicts the 0.23° × 0.31° model’s forced response in its most intense simulated tropical cyclones, a related measure of cyclogenesis potential fails to predict the model’s actual cyclogenesis response to warmer SSTs. These analyses lead to two broader conclusions: 1) Projections of future tropical storm activity obtained by a direct tracking of tropical storms simulated by coarse-resolution climate models must be interpreted with caution. 2) Projections of future tropical cyclogenesis obtained from metrics of model behavior that are based solely on changes in long-term climatological fields and tuned to historical records must also be interpreted with caution.
Wehner, Michael; Prabhat; Reed, Kevin A.; ...
2015-05-12
The four idealized configurations of the U.S. CLIVAR Hurricane Working Group are integrated using the global Community Atmospheric Model version 5.1 at two different horizontal resolutions, approximately 100 and 25 km. The publicly released 0.9° × 1.3° configuration is a poor predictor of the sign of the 0.23° × 0.31° model configuration’s change in the total number of tropical storms in a warmer climate. However, it does predict the sign of the higher-resolution configuration’s change in the number of intense tropical cyclones in a warmer climate. In the 0.23° × 0.31° model configuration, both increased CO2 concentrations and elevated sea surface temperature (SST) independently lower the number of weak tropical storms and shorten their average duration. Conversely, increased SST causes more intense tropical cyclones and lengthens their average duration, resulting in a greater number of intense tropical cyclone days globally. Increased SST also increased maximum tropical storm instantaneous precipitation rates across all storm intensities. It was found that while a measure of maximum potential intensity based on climatological mean quantities adequately predicts the 0.23° × 0.31° model’s forced response in its most intense simulated tropical cyclones, a related measure of cyclogenesis potential fails to predict the model’s actual cyclogenesis response to warmer SSTs. These analyses lead to two broader conclusions: 1) Projections of future tropical storm activity obtained by a direct tracking of tropical storms simulated by coarse-resolution climate models must be interpreted with caution. 2) Projections of future tropical cyclogenesis obtained from metrics of model behavior that are based solely on changes in long-term climatological fields and tuned to historical records must also be interpreted with caution.
Integrative Cardiac Health Project (ICHP)
2017-04-01
Award number: W81XWH-16-2-0007. Title: Integrative Cardiac Health Project (ICHP). Principal investigator: COL (Ret) Marina N. Vernalis, MC, USA. Abstract: The Integrative Cardiac Health Project (ICHP) aims to lead the way in Cardiovascular Disease (CVD
ESIF Call for High-Impact Integrated Projects | Energy Systems Integration
As a U.S. Department of Energy user facility, the Energy Systems Integration Facility provides the concepts, tools, and technologies needed to measure, analyze, predict, protect, and control the grid of the future.
Wei, Dongshan; Wang, Feng
2010-08-28
The damped-short-range-interaction (DSRI) method is proposed to mimic coarse-grained simulations by propagating an atomistic scale system on a smoothed potential energy surface. The DSRI method has the benefit of enhanced sampling provided by a typical coarse-grained simulation without the need to perform coarse-graining. Our method was used to simulate liquid water, alanine dipeptide folding, and the self-assembly of dimyristoylphosphatidylcholine lipid. In each case, our method appreciably accelerated the dynamics without significantly changing the free energy surface. Additional insights from DSRI simulations and the promise of coupling our DSRI method with Hamiltonian replica-exchange molecular dynamics are discussed.
NASA Astrophysics Data System (ADS)
Wei, Dongshan; Wang, Feng
2010-08-01
The damped-short-range-interaction (DSRI) method is proposed to mimic coarse-grained simulations by propagating an atomistic scale system on a smoothed potential energy surface. The DSRI method has the benefit of enhanced sampling provided by a typical coarse-grained simulation without the need to perform coarse-graining. Our method was used to simulate liquid water, alanine dipeptide folding, and the self-assembly of dimyristoylphosphatidylcholine lipid. In each case, our method appreciably accelerated the dynamics without significantly changing the free energy surface. Additional insights from DSRI simulations and the promise of coupling our DSRI method with Hamiltonian replica-exchange molecular dynamics are discussed.
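The abstract does not give the DSRI functional form, so the sketch below only illustrates the general idea of propagating dynamics on a smoothed potential energy surface by softening the short-range repulsion of a Lennard-Jones pair potential. The linear-core continuation and the cap radius `r_damp` are illustrative assumptions, not the authors' damping scheme.

```python
import numpy as np


def lj(r, eps=1.0, sigma=1.0):
    """Plain Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)


def smoothed_lj(r, eps=1.0, sigma=1.0, r_damp=0.9):
    """Soften the steep repulsive core below r_damp (r is an array of distances).

    Below r_damp the potential is continued linearly (constant force), which
    removes the stiffest interactions and mimics the enhanced sampling of a
    coarse-grained model while leaving the potential unchanged elsewhere.
    """
    r = np.asarray(r, dtype=float)
    U = lj(np.maximum(r, r_damp), eps, sigma)
    # force at the cap radius via a central finite difference
    dr = 1e-6
    F_at_damp = -(lj(r_damp + dr, eps, sigma) - lj(r_damp - dr, eps, sigma)) / (2 * dr)
    inside = r < r_damp
    # linear continuation: U(r) = U(r_damp) + F(r_damp) * (r_damp - r)
    U[inside] = lj(r_damp, eps, sigma) + F_at_damp * (r_damp - r[inside])
    return U
```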
NASA Astrophysics Data System (ADS)
McComiskey, A. C.; Telg, H.; Sheridan, P. J.; Kassianov, E.
2017-12-01
The coarse mode contribution to the aerosol radiative effect in a range of clean and turbid aerosol regimes has not been well quantified. While the coarse-mode radiative effect in turbid conditions is generally assumed to be consequential, the effect in clean conditions has likely been underestimated. We survey ground-based in situ measurements of the coarse mode fraction of aerosol optical properties measured around the globe over the past 20 years by the DOE Atmospheric Radiation Measurement Facility and the NOAA Global Monitoring Division. The aerosol forcing efficiency is presented, allowing an evaluation of where the aerosol coarse mode might be climatologically significant.
Examining Thai high school students' developing STEM projects
NASA Astrophysics Data System (ADS)
Teenoi, Kultida; Siripun, Kulpatsorn; Yuenyong, Chokchai
2018-01-01
Like many countries, Thailand has placed a strong focus on STEM education. This paper aimed to examine Thai high school students' existing integrated knowledge about science, technology, engineering, and mathematics (STEM) in their development of science projects. The participants were 49 high school students studying the individual study (IS) subject at Khon Kaen Wittayayon School, Khon Kaen, Thailand. The IS subject is provided to gradually teach students how to carry out science projects, from getting started onward; the students had been enrolled in it for three consecutive years. The methodology was qualitative. Views of students' integrated knowledge about STEM were interpreted through participant observation, interviews, and the students' science projects. The first author, as a participant observer, had taught this group of students for three years. Sixteen science projects were developed. Views of students' integrated knowledge about STEM could be categorized into three categories: (1) complete integration of knowledge about science, technology, engineering, and mathematics; (2) partial integration; and (3) no integration. The findings revealed that the majority of science projects could be categorized as showing complete integration of knowledge about science, technology, engineering, and mathematics. The paper suggests some ideas for helping students apply STEM knowledge when developing science projects.
Research a Novel Integrated and Dynamic Multi-object Trade-Off Mechanism in Software Project
NASA Astrophysics Data System (ADS)
Jiang, Weijin; Xu, Yuhui
Aiming at the practical requirements of present-day software project management and control, this paper constructs an integrated multi-object trade-off model based on software project process management, so as to realize integrated and dynamic trade-offs across the project's multi-object system. Based on an analysis of the basic principles of dynamic control and of the integrated multi-object trade-off process, the paper combines methods from cybernetics and network technology and, by monitoring critical reference points defined by the control objects, discusses the integrated and dynamic multi-object trade-off model and its corresponding rules and mechanism, in order to integrate process management with trade-offs across the multi-object system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary S.
2017-09-01
Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve these properties over large time and length scales, it is imperative to develop coarse-grained models that retain atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics for different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assemblies as the nanoparticles are driven close enough to form nanostructures.
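A minimal sketch of the kind of mapping implied above, grouping a fixed number of consecutive methylene (united-atom) sites into one coarse-grained bead placed at their centre of mass. The array layout and the default of five sites per bead are illustrative assumptions.

```python
import numpy as np


def map_chain_to_beads(positions, masses, lam=5):
    """Map an atomistic chain to CG beads of lam consecutive sites each.

    positions : (N, 3) array of united-atom coordinates along one chain
    masses    : (N,) array of site masses
    Returns an (N // lam, 3) array of centre-of-mass bead coordinates; any
    trailing sites that do not fill a complete bead are dropped in this sketch.
    """
    n_beads = len(positions) // lam
    pos = positions[: n_beads * lam].reshape(n_beads, lam, 3)
    m = masses[: n_beads * lam].reshape(n_beads, lam, 1)
    return (pos * m).sum(axis=1) / m.sum(axis=1)
```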
Reengineering outcomes management: an integrated approach to managing data, systems, and processes.
Neuman, K; Malloch, K; Ruetten, V
1999-01-01
The integration of outcomes management into organizational reengineering projects is often overlooked or marginalized relative to the scope of the overall project. Incorporating an integrated outcomes management program strengthens the overall quality of reengineering projects and enhances their sustainability. This article presents a case study in which data, systems, and processes were reengineered to form an effective outcomes management program as a component of the organization's overall project. The authors describe eight steps to develop and monitor an integrated outcomes management program, and an example of an integrated report format is included.
NASA Astrophysics Data System (ADS)
Castro, C. L.; Dominguez, F.; Chang, H.
2010-12-01
Current seasonal climate forecasts and climate change projections of the North American monsoon are based on the use of coarse-scale information from general circulation models. The global models, however, have substantial difficulty resolving the regional-scale forcing mechanisms of precipitation. This is especially true during the North American monsoon in the warm season, when precipitation is driven primarily by the diurnal cycle of convection, a process that cannot be resolved in coarse-resolution global models with a relatively poor representation of terrain. Though statistical downscaling may offer a relatively expedient way to generate information more appropriate for the regional scale, and is already being used in resource decision-making processes in the Southwest U.S., its main drawback is that it cannot account for a non-stationary climate. Here we demonstrate the use of a regional climate model, specifically the Weather Research and Forecasting (WRF) model, for dynamical downscaling of the North American monsoon. To drive the WRF simulations, we use retrospective reforecasts from the Climate Forecast System (CFS) model, the operational model used at the U.S. National Centers for Environmental Prediction, and three select "well performing" IPCC AR4 models for the A2 emissions scenario. Though relatively computationally expensive, the use of WRF as a regional climate model in this way adds substantial value to the representation of the North American monsoon. In both cases, the regional climate model captures a fairly realistic and reasonable monsoon, where none exists in the driving global model, and captures the dominant modes of precipitation anomalies associated with ENSO and the Pacific Decadal Oscillation (PDO). Long-term precipitation variability and trends in these simulations are considered via the standardized precipitation index (SPI), a commonly used metric to characterize long-term drought. The dynamically downscaled climate projection data will be integrated into future water resource projections in the state of Arizona through a cooperative effort involving numerous water resource stakeholders.
NASA Astrophysics Data System (ADS)
Chen, Fu-Lin; Williams, Ronald; Svendsen, Erik; Yeatts, Karin; Creason, John; Scott, James; Terrell, Dock; Case, Martin
Coarse particulate matter (PM10) concentration data from residential outdoor sites were collected using portable samplers as part of an exposure assessment for the North Carolina Asthma and Children's Environment Studies (NC-ACES). PM10 values were estimated using the differential between independent PM10 and PM2.5 collocated MiniVol measurements. Repeated daily 24-h integrated PM10 and PM2.5 residential outdoor monitoring was performed at a total of 26 homes during September 2003-June 2004 in the Research Triangle Park, NC area. This effort resulted in the collection of 73 total daily measurements. This assessment was conducted to provide data needed to investigate the association of exposures to coarse particle PM mass concentrations with observed human health effects. Potential instrument bias between the differential MiniVol methodology and a dichotomous sampler was investigated. Results indicated that minimal bias of PM10 mass concentration estimates (slope = 0.8, intercept = 0.36 μg m⁻³) existed between the dichotomous and differential MiniVol procedures. Residential outdoor PM10 mass concentrations were observed to be highly variable across measurement days and ranged from 1.1 to 12.6 μg m⁻³ (mean of 5.4 μg m⁻³). An average correlation coefficient of r = 0.75 existed between residential outdoor PM10 mass concentrations and those obtained from the central ambient monitoring site. Temporal and spatial variability of PM10 mass concentrations during the study were observed and are described in this report.
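A minimal sketch of the differential estimate and the bias regression described above, assuming plain arrays of collocated 24-h MiniVol PM10 and PM2.5 values and a collocated dichotomous-sampler reference; the function names are hypothetical.

```python
import numpy as np


def coarse_pm_by_difference(pm10_minivol, pm25_minivol):
    """Differential MiniVol estimate of the coarse fraction (ug/m^3)."""
    return np.asarray(pm10_minivol, dtype=float) - np.asarray(pm25_minivol, dtype=float)


def bias_regression(estimate, reference):
    """Least-squares slope/intercept of the estimate against a reference sampler."""
    slope, intercept = np.polyfit(reference, estimate, deg=1)
    r = np.corrcoef(reference, estimate)[0, 1]
    return slope, intercept, r
```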
MAHLI at the Rocknest sand shadow: Science and science-enabling activities
NASA Astrophysics Data System (ADS)
Minitti, M. E.; Kah, L. C.; Yingst, R. A.; Edgett, K. S.; Anderson, R. C.; Beegle, L. W.; Carsten, J. L.; Deen, R. G.; Goetz, W.; Hardgrove, C.; Harker, D. E.; Herkenhoff, K. E.; Hurowitz, J. A.; Jandura, L.; Kennedy, M. R.; Kocurek, G.; Krezoski, G. M.; Kuhn, S. R.; Limonadi, D.; Lipkaman, L.; Madsen, M. B.; Olson, T. S.; Robinson, M. L.; Rowland, S. K.; Rubin, D. M.; Seybold, C.; Schieber, J.; Schmidt, M.; Sumner, D. Y.; Tompkins, V. V.; Van Beek, J. K.; Van Beek, T.
2013-11-01
On Martian solar days 57-100, the Mars Science Laboratory Curiosity rover acquired and processed a solid (sediment) sample and analyzed its mineralogy and geochemistry with the Chemistry and Mineralogy and Sample Analysis at Mars instruments. An aeolian deposit—herein referred to as the Rocknest sand shadow—was inferred to represent a global average soil composition and was selected for study to facilitate integration of analytical results with observations from earlier missions. During these first-time activities, the Mars Hand Lens Imager (MAHLI) was used to support both science and engineering activities related to sample assessment, collection, and delivery. Here we report on MAHLI activities that directly supported sample analysis and provide MAHLI observations regarding the grain-scale characteristics of the Rocknest sand shadow. MAHLI imaging confirms that the Rocknest sand shadow is one of a family of bimodal aeolian accumulations on Mars—similar to the coarse-grained ripples interrogated by the Mars Exploration Rovers Spirit and Opportunity—in which a surface veneer of coarse-grained sediment stabilizes the predominantly fine-grained sediment of the deposit interior. The similarity in grain size distribution of these geographically disparate deposits supports the widespread occurrence of bimodal aeolian transport on Mars. We suggest that preservation of bimodal aeolian deposits may be characteristic of regions of active deflation, where winnowing of the fine-sediment fraction results in a relatively low sediment load and a preferential increase in the coarse-grained fraction of the sediment load. The compositional similarity of Martian aeolian deposits supports the potential for global redistribution of fine-grained components, combined with potential local contributions.
Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch
2011-11-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
Xu, Yinlin; Ma, Qianli D.Y.; Schmitt, Daniel T.; Bernaola-Galván, Pedro; Ivanov, Plamen Ch.
2014-01-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences. PMID:25392599
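A minimal sketch of two of the coarse-graining operations studied above: the Floor method (magnitude quantization with partition width Δ) and coarse-graining in time by averaging non-overlapping windows. The Symmetry and Centro-Symmetry variants are not reproduced here.

```python
import numpy as np


def floor_coarse_grain(x, delta):
    """Floor method: quantize signal magnitudes onto a grid of width delta."""
    return delta * np.floor(np.asarray(x, dtype=float) / delta)


def time_coarse_grain(x, window):
    """Average the signal in non-overlapping windows of the given length."""
    x = np.asarray(x, dtype=float)
    n = len(x) // window
    return x[: n * window].reshape(n, window).mean(axis=1)
```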
Improvements in agricultural water decision support using remote sensing
NASA Astrophysics Data System (ADS)
Marshall, M. T.
2012-12-01
Population-driven water scarcity, aggravated by climate-driven evaporative demand in dry regions of the world, has the potential to transform ecological and social systems to the point of armed conflict. Water shortages will be most severe in agricultural areas as priority shifts to urban and industrial use. In order to design, evaluate, and monitor appropriate mitigation strategies, predictive models must be developed that quantify exposure to water shortage. Remote sensing data have been used for more than three decades to parametrize these models, because field measurements are costly and difficult in remote regions of the world. With the advent of hyperspatial, hyperspectral, and coarse-resolution continuous remote sensing data in the past decade, decision-makers can for the first time make accurate and near real-time evaluations of field conditions. Here we summarize two projects representing diverse applications of remote sensing to improve agricultural water decision support. The first project employs MODIS (coarse-resolution continuous data) to drive an evapotranspiration index, which is combined with the Standardized Precipitation Index driven by meteorological satellite data to improve famine early warning in Africa. The combined index is evaluated using district-level crop yield data from Kenya and Malawi and national-level crop yield data from the United Nations Food and Agriculture Organization. The second project utilizes hyperspatial (GeoEye-1, QuickBird, IKONOS, and RapidEye) and hyperspectral (Hyperion/ALI) data, as well as multi-spectral (Landsat ETM+, SPOT, and MODIS) data, to develop biomass estimates for key crops (alfalfa, corn, cotton, and rice) in the Central Valley of California. Crop biomass is an important indicator of crop water productivity. The remote sensing data are combined using various data fusion techniques and evaluated with field data collected in the summer of 2012. We conclude with a brief discussion of the implementation of these tools into two new decision support systems: the FEWS NET Early Warning Explorer (http://earlywarning.usgs.gov/fews/ewxindex.php) and the NASA Terrestrial Observation and Prediction System (http://ecocast.arc.nasa.gov/) for the first and second project, respectively.
NASA Astrophysics Data System (ADS)
Robles-Morua, A.; Vivoni, E. R.; Volo, T. J.; Rivera, E. R.; Dominguez, F.; Meixner, T.
2011-12-01
This project is part of a multidisciplinary effort aimed at understanding the impacts of climate variability and change on the ecological services provided by riparian ecosystems in semiarid watersheds of the southwestern United States. Valuing the environmental and recreational services provided by these ecosystems in the future requires a numerical simulation approach to estimate streamflow in ungauged tributaries as well as diffuse and direct recharge to groundwater basins. In this work, we utilize a distributed hydrologic model known as the TIN-based Real-time Integrated Basin Simulator (tRIBS) in the upper Santa Cruz and San Pedro basins with the goal of generating simulated hydrological fields that will be coupled to a riparian groundwater model. With the distributed model, we will evaluate a set of climate change and population scenarios to quantify future conditions in these two river systems and their impacts on flood peaks, recharge events and low flows. Here, we present a model confidence building exercise based on high performance computing (HPC) runs of the tRIBS model in both basins during the period of 1990-2000. Distributed model simulations utilize best-available data across the US-Mexico border on topography, land cover and soils obtained from analysis of remotely-sensed imagery and government databases. Meteorological forcing over the historical period is obtained from a combination of sparse ground networks and weather radar rainfall estimates. We then focus on a comparison between simulation runs using ground-based forcing to cases where the Weather Research Forecast (WRF) model is used to specify the historical conditions. Two spatial resolutions are considered from the WRF model fields - a coarse (35-km) and a downscaled (10- km) forcing. Comparisons will focus on the distribution of precipitation, soil moisture, runoff generation and recharge and assess the value of the WRF coarse and downscaled products. These results provide confidence in the model application and a measure of modeling uncertainty that will help set the foundation for forthcoming climate change studies.
NASA Astrophysics Data System (ADS)
Vogel, J. G.; Bacon, A. R.; Bracho, R. G.; Gonzalez-Benecke, C. A.; Fox, T. D.; Laviner, M. A.; Kane, M.; Burkhart, H.; Martin, T.; Will, R.; Ross, C. W.; Grunwald, S.; Jokela, E. J.; Meek, C.
2016-12-01
Extending from Virginia to east Texas in the southeastern United States, managed pine plantations are an important component of the region's carbon cycle. An objective of the Pine Integrated Network: Education, Mitigation, and Adaptation project (PINEMAP) is to improve estimates of how ecosystem carbon pools respond to the management strategies used to increase the growth of loblolly pine plantations. Experimental studies (108 total) that have been used by university-forest industry cooperatives to understand plantation productivity and stand dynamics were measured for the carbon stored in trees, roots, coarse wood, soil detritus, forest floor, understory, and soils to 1 m depth. The age of the studied plantations ranged from 4 to 26 years at the time of sampling, with 26 years very near the age at which these plantations are commonly harvested. Across all study sites, 455 experimental plots were measured. The average C storage across all pools, sites, and treatments was 192 Mg C ha⁻¹, with the average percentage of the total coming from soil (44%), tree biomass (40%), forest floor (8%), root (5%), soil detritus (2%), understory biomass (1%), and coarse-wood (<1%) pools. Plots received fertilization, competition control, or stand density control (thinning) as treatments, in every possible combination including 'no treatment'. A paired-plot analysis was used, in which two plots at a site were examined for relative differences caused by a single treatment and these differences were averaged across the region. Thinning as a stand-alone treatment significantly reduced forest floor mass by 60%, and the forest floor in thinned plots combined with either competition control or fertilization was 18.9% and 19.2% less, respectively, than in unthinned stands with the same treatments. Competition control increased C storage in tree biomass by 12%, and thinning decreased tree biomass by 32%. Thinning combined with fertilization had lower soil carbon (0-1 m) than unthinned-fertilized plots (22%), although the replication for this combination was relatively low (n = 6). Overall, these results suggest that maintaining higher tree densities increases ecosystem carbon storage across multiple pools of C in loblolly pine plantations.
NASA Astrophysics Data System (ADS)
Li, Cheng; Borken-Kleefeld, Jens; Zheng, Junyu; Yuan, Zibing; Ou, Jiamin; Li, Yue; Wang, Yanlong; Xu, Yuanqian
2018-05-01
Ship emissions contribute significantly to air pollution and pose health risks to residents of coastal areas in China, but the current research remains incomplete and coarse due to limited data availability and inaccuracy in estimation methods. In this study, an integrated approach based on the Automatic Identification System (AIS) was developed to address this problem. The approach utilizes detailed information from AIS, cargo turnover, and vessel calling numbers, and is thereby capable of quantifying sectoral contributions by fuel type and emissions from ports, rivers, coastal traffic, and over-the-horizon ship traffic. Based on the established methodology, ship emissions in China from 2004 to 2013 were estimated, and emissions to 2040 at 5-year intervals were projected under different control scenarios. Results showed that for the area within 200 nautical miles (Nm) of the Chinese coast, SO2, NOx, CO, PM10, PM2.5, hydrocarbon (HC), black carbon (BC), and organic carbon (OC) emissions in 2013 were 1010, 1443, 118, 107, 87, 67, 29 and 21 kt yr⁻¹, respectively, having doubled over these 10 years. Ship sources contributed ~10% of the total SO2 and NOx emissions in the coastal provinces of China. Emissions from the proposed Domestic Emission Control Areas (DECAs) within 12 Nm constituted approximately 40% of all ship emissions along the Chinese coast, and this percentage would double if the DECA boundary were extended to 100 Nm. Ship emissions in ports accounted for about one-quarter of the total emissions within 200 Nm, and nearly 80% of these port emissions were concentrated in the top 10 busiest ports of China. SO2 emissions could be reduced by 80% in 2020 under a 0.5% global sulfur cap policy. In comparison, a similar reduction of NOx emissions would require significant technological change and would likely take several decades. This study provides solid scientific support for ship emission control policy making in China. We suggest that emissions from the shipping sector be investigated and monitored in more detail in the future.
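For context, an AIS-driven inventory typically performs an activity-based calculation of the kind sketched below for each vessel track segment: engine load from the propeller law, then emission as installed power × load × time × emission factor. The ship particulars and emission factor in the example are placeholders, not values from this study.

```python
def ais_segment_emission(speed_kn, hours, installed_kw, design_speed_kn, ef_g_per_kwh):
    """Emission (kg) for one AIS segment of a single ship.

    Load factor from the propeller law: LF = (v / v_design)^3, capped at 1.
    Emission = installed power * LF * time * emission factor.
    """
    load = min((speed_kn / design_speed_kn) ** 3, 1.0)
    return installed_kw * load * hours * ef_g_per_kwh / 1000.0  # g -> kg


# Illustrative numbers only: a 10 MW container ship steaming 2 h at 18 kn
# (design speed 24 kn) with an assumed NOx emission factor of 14 g/kWh.
print(ais_segment_emission(18, 2, 10_000, 24, 14))   # ~118 kg NOx
```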
Tress, Bärbel; Tress, Gunther; Fry, Gary
2009-07-01
The growing demand for integrative (interdisciplinary or transdisciplinary) approaches in the field of environmental and landscape change has increased the number of PhD students working in this area. Yet, the motivations to join integrative projects and the challenges for PhD students have so far not been investigated. The aims of this paper were to identify the understanding of PhD students with regard to integrative research, their motivations to join integrative projects, their expectations in terms of integration and results, and to reveal the challenges they face in integrative projects. We collected data by a questionnaire survey of 104 PhD students attending five PhD Master Classes held from 2003 to 2006. We used manual content analysis to analyse the free-text answers. The results revealed that students lack a differentiated understanding of integrative approaches. The main motivations to join integrative projects were the dissertation subject, the practical relevance of the project, the intellectual stimulation of working with different disciplines, and the belief that integrative research is more innovative. Expectations in terms of integration were high. Core challenges for integration included intellectual and external challenges such as lack of knowledge of other disciplines, knowledge transfer, reaching depth, supervision, lack of exchange with other students and time demands. To improve the situation for PhD students, we suggest improving knowledge on integrative approaches, balancing practical applicability with theoretical advancement, providing formal introductions to other fields of research, and enhancing institutional support for integrative PhD projects.
Evaluation of various coarse aggregate concretes : final report.
DOT National Transportation Integrated Search
1983-10-01
This study was initiated to determine the properties of concrete using three types of coarse aggregate. The coarse aggregates evaluated in this study included silicious gravel, the standard aggregate for concrete in the state, with sandstone and lime...
2015-06-01
very coarse architectural model proposed in Section 2.4 into something that might be implemented. Figure 11 shows the model we have created based ...interoperability through common data models. So many of the pieces are either in place or are being developed currently. However, SEA still needs: • A core...of knowledge derived through the scientific method. In NATO, S&T is addressed using different business models, namely a collaborative business model
Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proof-of-Concept Experiments.
1992-05-01
Front matter (table of contents, list of figures, and list of tables): topics include the DNA analysis strategy, representation of DNA bases by 7-bit-long pseudorandom sequences, custom generators for DNA sequences and their hardware design, and coarse and fine analysis of DNA sequences against a 20-base-long database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trément, Sébastien; Rousseau, Bernard, E-mail: bernard.rousseau@u-psud.fr; Schnell, Benoît
2014-04-07
We apply operational procedures available in the literature to the construction of coarse-grained conservative and friction forces for use in dissipative particle dynamics (DPD) simulations. The full procedure relies on a bottom-up approach: large molecular dynamics trajectories of n-pentane and n-decane modeled with an anisotropic united-atom model serve as input for the force field generation. As a consequence, the coarse-grained model is expected to reproduce at least semi-quantitatively the structural and dynamical properties of the underlying atomistic model. Two different coarse-graining levels are studied, corresponding to five and ten carbon atoms per DPD bead. The influence of the coarse-graining level on the generated force field contributions, namely the conservative and the friction parts, is discussed. It is shown that the coarse-grained model of n-pentane correctly reproduces the self-diffusion and viscosity coefficients of real n-pentane, while the fully coarse-grained model for n-decane at ambient temperature over-predicts diffusion by a factor of 2. However, when the n-pentane coarse-grained model is used as a building block for a larger molecule (e.g., n-decane as a two-blob model), much better agreement with experimental data is obtained, suggesting that the constructed force field is transferable to large macromolecular systems.
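For background, the conservative, dissipative (friction), and random contributions of a standard DPD pair force are sketched below with the usual weight functions and the fluctuation-dissipation relation σ² = 2γkBT. This is the textbook DPD form, not the bottom-up force field constructed in the paper.

```python
import numpy as np


def dpd_pair_force(rij, vij, a=25.0, gamma=4.5, kBT=1.0, rc=1.0, dt=0.01, rng=None):
    """Total DPD force on particle i from particle j (reduced units).

    rij = r_i - r_j, vij = v_i - v_j.
    F^C = a*(1 - r/rc)*rhat
    F^D = -gamma*w(r)^2*(rhat . vij)*rhat,  with w(r) = 1 - r/rc
    F^R = sigma*w(r)*xi/sqrt(dt)*rhat,      with sigma = sqrt(2*gamma*kBT)
    """
    rng = np.random.default_rng() if rng is None else rng
    r = np.linalg.norm(rij)
    if r >= rc or r == 0.0:
        return np.zeros(3)
    rhat = rij / r
    w = 1.0 - r / rc
    sigma = np.sqrt(2.0 * gamma * kBT)
    fc = a * w * rhat
    fd = -gamma * w ** 2 * np.dot(rhat, vij) * rhat
    fr = sigma * w * rng.standard_normal() / np.sqrt(dt) * rhat
    return fc + fd + fr
```

The choice w_D = w_R² together with σ² = 2γkBT is what guarantees the correct equilibrium temperature; a bottom-up procedure such as the one above instead derives the conservative and friction terms from atomistic trajectories.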
Proceedings of the 13th Project integration meeting
NASA Technical Reports Server (NTRS)
Mcdonald, R. R.
1979-01-01
Progress made by the Low Cost Solar Array Project during the period April through August 1979 is presented. Reports are given on project analysis and integration; technology development in silicon material, large-area sheet silicon, and encapsulation; production process and equipment development; and engineering and operations, along with a discussion of the steps taken to integrate these efforts. A report on, and copies of the viewgraphs presented at, the Project Integration Meeting held August 22-23, 1979, are included.
Lin, Pei-Feng; Lo, Men-Tzung; Tsao, Jenho; Chang, Yi-Chung; Lin, Chen; Ho, Yi-Lwun
2014-01-01
The heart begins to beat before the brain is formed. Whether conventional hierarchical central commands sent by the brain to the heart alone explain all the interplay between these two organs should be reconsidered. Here, we demonstrate correlations between the signal complexity of brain and cardiac activity. Eighty-seven geriatric outpatients with healthy hearts and varied cognitive abilities each provided a 24-hour electrocardiogram (ECG) and a 19-channel eyes-closed routine electroencephalogram (EEG). Multiscale entropy (MSE) analysis was applied to three epochs (resting-awake state, photic stimulation of fast frequencies (fast-PS), and photic stimulation of slow frequencies (slow-PS)) of EEG in the 1–58 Hz frequency range, and three RR interval (RRI) time series (awake-state, sleep, and that concomitant with the EEG) for each subject. The low-to-high frequency power (LF/HF) ratio of RRI was calculated to represent sympatho-vagal balance. With statistics after Bonferroni corrections, we found that: (a) the summed MSE value on coarse scales of the awake RRI (scales 11–20, RRI-MSE-coarse) was inversely correlated with the summed MSE value on coarse scales of the resting-awake EEG (scales 6–20, EEG-MSE-coarse) at Fp2, C4, T6 and T4; (b) the awake RRI-MSE-coarse was inversely correlated with the fast-PS EEG-MSE-coarse at O1, O2 and C4; (c) the sleep RRI-MSE-coarse was inversely correlated with the slow-PS EEG-MSE-coarse at Fp2; (d) the RRI-MSE-coarse and the LF/HF ratio of the awake RRI were positively correlated with each other; (e) the EEG-MSE-coarse at F8 was proportional to the cognitive test score; (f) the results conform to the cholinergic hypothesis, which states that cognitive impairment causes a reduction in vagal cardiac modulation; (g) fast-PS significantly lowered the EEG-MSE-coarse globally. Whether these heart-brain correlations can be fully explained by the central autonomic network is unknown and needs further exploration. PMID:24498375
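A minimal sketch of the multiscale entropy procedure referenced above: coarse-grain the series at each scale by non-overlapping averaging, then compute sample entropy. The parameters m = 2 and r = 0.15·SD follow common MSE practice and are assumptions here, not necessarily the study's settings.

```python
import numpy as np


def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)


def sample_entropy(x, m=2, r_frac=0.15):
    """SampEn(m, r) with tolerance r = r_frac * SD of the series."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    n_templ = len(x) - m          # same template count for lengths m and m+1

    def pair_count(mm):
        t = np.array([x[i:i + mm] for i in range(n_templ)])
        count = 0
        for i in range(n_templ - 1):
            dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)   # Chebyshev distance
            count += np.sum(dist <= r)
        return count

    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf


def multiscale_entropy(x, scales=range(1, 21)):
    """MSE curve: SampEn of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, s)) for s in scales]
```

Summing the curve over the coarse scales (e.g. scales 11–20 for RRI) gives quantities analogous to the RRI-MSE-coarse and EEG-MSE-coarse measures used above.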
Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Kurtz, Jason Patrick
We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.
ERIC Educational Resources Information Center
Hirotani, Maki; Fujii, Kiyomi
2015-01-01
Many studies on intercultural communication introduced how their collaborative projects were conducted. There are also several studies that discuss how intercultural collaborative activities can be integrated into a foreign language curriculum, as well as a big project (the INTENT project) that helps teachers integrate collaborative activities…
ERIC Educational Resources Information Center
Carlisle, Katie
2011-01-01
The author reports on the formation of a performing arts-focused curriculum integration project, in which key components of curriculum integration were employed within a project-focus involving the performing arts of music, theater, and dance. The project occurred within a curricular community partnership between a public school and nearby…
Effects of coarse aggregate on the physical properties of Florida concrete mixes.
DOT National Transportation Integrated Search
2015-10-01
Portland cement concrete is a heterogeneous, composite material composed of coarse and fine granular material embedded in a matrix of hardened paste. The coarse material is aggregate, which is primarily used as inexpensive filler and comprises th...
An Integrated Framework for Parameter-based Optimization of Scientific Workflows.
Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel
2009-01-01
Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
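A toy Python sketch of the kind of parameter-space search described above follows: some knobs (here a hypothetical chunk size) change only runtime, while a hypothetical quality knob trades accuracy for speed, and the search minimizes runtime subject to an accuracy constraint. All names and cost models are illustrative assumptions, not part of the framework in the paper.

import itertools

# Hypothetical per-component parameters: chunking affects only runtime,
# while `quality` trades output accuracy for speed.
CHUNK_SIZES = [64, 128, 256]
QUALITY_LEVELS = [0.6, 0.8, 1.0]

def estimate_runtime(chunk, quality):
    # Stand-in cost model; a real framework would use measured performance models.
    return 1000.0 / chunk + 50.0 * quality

def estimate_accuracy(quality):
    return quality  # assume accuracy scales with the quality knob

def best_configuration(min_accuracy):
    # Exhaustive search: minimize runtime subject to an accuracy constraint.
    feasible = [(estimate_runtime(c, q), c, q)
                for c, q in itertools.product(CHUNK_SIZES, QUALITY_LEVELS)
                if estimate_accuracy(q) >= min_accuracy]
    return min(feasible) if feasible else None

print(best_configuration(min_accuracy=0.8))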
Parareal algorithms with local time-integrators for time fractional differential equations
NASA Astrophysics Data System (ADS)
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations because of the history effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
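For orientation, the sketch below implements the classical parareal correction U_{k+1}[n+1] = G(U_{k+1}[n]) + F(U_k[n]) - G(U_k[n]) in Python for a scalar test equation; in practice the fine-propagator evaluations within each iteration are the part distributed across processes. The fractional local time-integrators and the mixed coarse-grid correction of the auxiliary variables proposed in the paper are not reproduced here.

import numpy as np

def parareal(u0, t, G, F, n_iter=5):
    # Classical parareal: U_{k+1}[n+1] = G(U_{k+1}[n]) + F(U_k[n]) - G(U_k[n]).
    # G: cheap coarse propagator, F: accurate fine propagator, both mapping
    # (u, t_n, t_{n+1}) -> u at t_{n+1}.
    N = len(t) - 1
    U = np.empty(N + 1)
    U[0] = u0
    for n in range(N):                      # initial coarse sweep
        U[n + 1] = G(U[n], t[n], t[n + 1])
    for _ in range(n_iter):
        F_old = np.array([F(U[n], t[n], t[n + 1]) for n in range(N)])  # parallel part
        G_old = np.array([G(U[n], t[n], t[n + 1]) for n in range(N)])
        for n in range(N):                  # sequential correction sweep
            U[n + 1] = G(U[n], t[n], t[n + 1]) + F_old[n] - G_old[n]
    return U

# Example: du/dt = -u; coarse = one Euler step, fine = many Euler substeps.
G = lambda u, a, b: u + (b - a) * (-u)
def F(u, a, b, m=100):
    h = (b - a) / m
    for _ in range(m):
        u = u + h * (-u)
    return u

t = np.linspace(0.0, 2.0, 11)
print(parareal(1.0, t, G, F))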
Integrating Metal-Oxide-Decorated CNT Networks with a CMOS Readout in a Gas Sensor
Lee, Hyunjoong; Lee, Sanghoon; Kim, Dai-Hong; Perello, David; Park, Young June; Hong, Seong-Hyeon; Yun, Minhee; Kim, Suhwan
2012-01-01
We have implemented a tin-oxide-decorated carbon nanotube (CNT) network gas sensor system on a single die. We have also demonstrated the deposition of metallic tin on the CNT network, its subsequent oxidation in air, and the improvement of the lifetime of the sensors. The fabricated array of CNT sensors contains 128 sensor cells for added redundancy and increased accuracy. The read-out integrated circuit (ROIC) was combined with coarse and fine time-to-digital converters to extend its resolution in a power-efficient way. The ROIC is fabricated using a 0.35 μm CMOS process, and the whole sensor system consumes 30 mA at 5 V. The sensor system was successfully tested in the detection of ammonia gas at elevated temperatures. PMID:22736966
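The coarse/fine time-to-digital conversion mentioned above can be pictured as a coarse counter of reference-clock periods plus a fine interpolator that subdivides the last period; the toy arithmetic below uses an assumed 100 MHz clock and a 6-bit interpolator, which are illustrative values and not the specifications of the fabricated ROIC.

# Toy coarse+fine time-to-digital conversion: a coarse counter counts whole
# reference-clock periods and a fine interpolator resolves the residue inside
# the last period. All numbers below are illustrative, not from the paper.
T_CLK_NS = 10.0          # assumed 100 MHz reference clock
FINE_BITS = 6            # assumed fine-interpolator resolution

def tdc_timestamp(coarse_count, fine_code):
    # fine_code in [0, 2**FINE_BITS) subdivides one clock period.
    lsb = T_CLK_NS / (1 << FINE_BITS)      # fine LSB = 10 ns / 64 ~ 0.156 ns
    return coarse_count * T_CLK_NS + fine_code * lsb

print(tdc_timestamp(coarse_count=123, fine_code=41))   # ~ 1236.4 ns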
NASA Technical Reports Server (NTRS)
Thomas, Leann; Utley, Dawn
2006-01-01
While there has been extensive research in defining project organizational structures for traditional projects, little research exists to support the definition of organizational structures for high-technology government projects. High-technology government projects differ from traditional projects in that they are non-profit, span Government-Industry organizations, typically require significant integration effort, and are strongly susceptible to a volatile external environment. Systems Integration implementation has been identified as a major contributor to both project success and failure. The literature research bridges program management organizational planning, systems integration, organizational theory, and independent project reports, in order to assess Systems Integration (SI) organizational structure selection for improving the high-technology government project's probability of success. This paper will describe the methodology used to 1) identify and assess SI organizational structures and their success rate, and 2) identify key factors to be used in the selection of these SI organizational structures during the acquisition strategy process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fix, N. J.
The scope of the Fluor Hanford, Inc. Groundwater and Technical Integration Support (Master Project) is to provide technical and integration support to Fluor Hanford, Inc., including operable unit investigations at 300-FF-5 and other groundwater operable units, strategic integration, technical integration and assessments, remediation decision support, and science and technology. This Quality Assurance Management Plan provides the quality assurance requirements and processes that will be followed by the Fluor Hanford, Inc. Groundwater and Technical Integration Support (Master Project).
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
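The following Python sketch is a greatly simplified spectral analogue of the graph-Laplacian-based mapping idea: it builds the Laplacian of a linear homopolymer's bond graph and groups beads into contiguous coarse-grained sites using the slowest non-trivial eigenvector. It is not the diffusion-wavelet construction of the paper, only an illustration of deriving a system-specific mapping from the graph Laplacian.

import numpy as np

def chain_laplacian(n_beads):
    # Graph Laplacian of a linear homopolymer: beads i and i+1 are bonded.
    A = np.zeros((n_beads, n_beads))
    for i in range(n_beads - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def spectral_cg_mapping(n_beads, n_cg):
    # Assign each bead to a CG site using the slowest non-trivial Laplacian
    # eigenvector (Fiedler-style partitioning into contiguous blocks).
    L = chain_laplacian(n_beads)
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                      # first non-zero mode
    # For a chain, quantiles of the Fiedler vector give contiguous blocks.
    edges = np.quantile(fiedler, np.linspace(0, 1, n_cg + 1))
    labels = np.clip(np.searchsorted(edges, fiedler, side="right") - 1, 0, n_cg - 1)
    return labels

# Three contiguous blocks of four beads (label order depends on eigenvector sign).
print(spectral_cg_mapping(n_beads=12, n_cg=3))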
Mechanical response of two polyimides through coarse-grained molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Sudarkodi, V.; Sooraj, K.; Nair, Nisanth N.; Basu, Sumit; Parandekar, Priya V.; Sinha, Nishant K.; Prakash, Om; Tsotsis, Tom
2018-03-01
Coarse-grained molecular dynamics (MD) simulations allow us to predict the mechanical responses of polymers, starting merely with a description of their molecular architectures. It is interesting to ask whether, given two competing molecular architectures, coarse-grained MD simulations can predict the differences that can be expected in their mechanical responses. We have studied two crosslinked polyimides, PMR15 and HFPE52, both used in high-temperature applications, to assess whether the subtle differences in their uniaxial stress-strain responses, revealed by experiments, can be reproduced by carefully coarse-grained MD models. The coarse-graining procedure for PMR15 is outlined in this work, while the coarse-grained force fields for HFPE52 are borrowed from earlier work (Pandiyan et al 2015 Macromol. Theory Simul. 24 513-20). We show that the stress-strain responses of both these polyimides are qualitatively reproduced, and important insights into their deformation and failure mechanisms are obtained. More importantly, the differences in the molecular architecture between the polyimides carry over to the differences in the stress-strain responses in a manner that parallels the experimental results. A critical assessment of the successes and shortcomings of predicting mechanical responses through coarse-grained MD simulations has been made.
Project management plan : Dallas Integrated Corridor Management (ICM) demonstration project.
DOT National Transportation Integrated Search
2010-12-01
The Dallas Integrated Corridor Management System Demonstration Project is a multi-agency, de-centralized operation which will utilize a set of regional systems to integrate the operations of the corridor. The purpose of the Dallas ICM System is to im...
The 19th Project Integration Meeting
NASA Technical Reports Server (NTRS)
Mcdonald, R. R.
1981-01-01
The Flat-Plate Solar Array Project is described. Project analysis and integration is discussed, along with technology research in silicon material, large-area silicon sheet and environmental isolation; cell and module formation; engineering sciences; and module performance and failure analysis. A report on, and copies of visual presentations made at, the 19th Project Integration Meeting held at Pasadena, California, on November 11, 1981 are included.
Attaining and maintaining data integrity with configuration management
NASA Astrophysics Data System (ADS)
Huffman, Dorothy J.; Jeane, Shirley A.
1993-08-01
Managers and scientists are concerned about data integrity because they draw conclusions from data that can have far-reaching effects. Project managers use Configuration Management to ensure that hardware, software, and project information are controlled. They have not, as yet, applied it rigorously to data. However, there is ample opportunity in the data collection and production process to jeopardize data integrity. Environmental changes, tampering and production problems can all affect data integrity. There are four functions included in the Configuration Management process: configuration identification, control, auditing and status accounting. These functions provide management the means to attain data integrity and the visibility into engineering processes needed to maintain data integrity. When project managers apply Configuration Management processes to data, the data user can trace back through history to validate data integrity. The user knows that the project allowed only orderly changes to the data. He is assured that project personnel followed procedures to maintain data quality. He also has access to status information about the data. The user receives data products with a known integrity level and a means to assess the impact of past events on the conclusions derived from the data. To obtain these benefits, project managers should apply the Configuration Management discipline to data.
Development of new test procedures for measuring fine and coarse aggregates specific gravity.
DOT National Transportation Integrated Search
2009-09-01
The objective of the research is to develop and evaluate new test methods for determining the specific gravity and absorption of both fine and coarse aggregates. Current methods for determining the specific gravity and absorption of fine and coarse agg...
Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora
2016-02-05
Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
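The iterative Boltzmann inversion update used above is standard: U_{i+1}(r) = U_i(r) + kT ln[g_i(r)/g_target(r)], iterated until the coarse-grained radial distribution function matches the atomistic target. A minimal one-dimensional Python sketch with synthetic stand-in data follows; the damping factor and grid are assumptions, not the settings used in the paper.

import numpy as np

def ibi_update(U, g_current, g_target, kT=1.0, alpha=1.0, eps=1e-12):
    # Iterative Boltzmann inversion step:
    #   U_{i+1}(r) = U_i(r) + alpha * kT * ln( g_i(r) / g_target(r) )
    # alpha < 1 damps the update for stability; eps guards against log(0).
    return U + alpha * kT * np.log((g_current + eps) / (g_target + eps))

# Usage sketch: start from the potential of mean force of the atomistic melt,
#   U_0(r) = -kT ln g_target(r),
# run a CG simulation to measure g_current(r), then update until g matches.
r = np.linspace(0.3, 1.5, 200)
g_target = np.exp(-((r - 0.9) ** 2) / 0.02) * 0.8 + 1.0   # synthetic stand-in
U = -1.0 * np.log(g_target)                                # PMF initial guess
g_current = np.roll(g_target, 3)                           # pretend CG result
U = ibi_update(U, g_current, g_target, kT=1.0, alpha=0.5)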
Coarse and fine sediment transportation patterns and causes downstream of the Three Gorges Dam
NASA Astrophysics Data System (ADS)
Li, Songzhe; Yang, Yunping; Zhang, Mingjin; Sun, Zhaohua; Zhu, Lingling; You, Xingying; Li, Kanyu
2017-11-01
Reservoir construction within a basin affects the process of water and sediment transport downstream of the dam. The Three Gorges Reservoir (TGR) affects the sediment transport downstream of the dam. The impoundment of the TGR reduced total downstream sediment. The sediment group d≤0.125 mm (fine particles) increased along the path, but the average was still below what existed before the reservoir impoundment. The sediment group d>0.125 mm (coarse particles) was recharged in the Yichang to Jianli reach, but showed a deposition trend downstream of Jianli. The coarse sediment in the Yichang to Jianli section in 2003 to 2007 was above the value before the TGR impoundment. However, the increase of both coarse and fine sediments in 2008 to 2014 was less than that in 2003 to 2007. The sediment retained in the dam is the major reason for the sediment reduction downstream. However, the retention in different river reaches is affected by riverbed coarsening, discharge, flow process, and conditions of lake functioning and recharging from the tributaries. The main conclusions derived from our study are as follows: 1) The riverbed in the Yichang to Shashi section was relatively coarse, thereby limiting the supply of fine and coarse sediments. The fine sediment supply was mainly controlled by TGR discharge, whereas the coarse sediment supply was controlled by the duration of high flow and its magnitude. 2) The supply of both coarse and fine sediments in the Shashi to Jianli section was controlled by the amount of total discharge. The sediment supply from the riverbed was higher in flood years than that in the dry years. The coarse sediment tended to deposit, and the deposition in the dry years was larger than that in the flood years. 3) The feeding of the fine sediment in the Luoshan to Hankou section was mainly from the riverbed. The supply in 2008 to 2014 was more than that in 2003 to 2007. Around 2010, the coarse sediments transitioned from deposition to scouring, which was probably caused by the increased duration of high-flow days. 4) Fine sediments appeared to be deposited in large amounts in the Hankou to Jiujiang section. The coarse sediment was fed by the riverbed scouring, and much more coarse sediments were recharged from the riverbed in the flood years than in the dry years. 5) In the Jiujiang to Datong section, the ratio of fine sediments from the Poyang Lake and that from the riverbed was 1:2.82. The sediment from the riverbed scouring contributed more to the coarse sediment transportation. The contribution was mainly affected by the magnitude and duration of high flows.
Löfström, Mikael
2010-01-01
For several years, the development of the Swedish public sector has been accompanied by a discussion about inter-organizational collaboration, which has been examined in several national experiments. The experience, however, indicates significant difficulties in implementing collaboration in local authorities' regular activities. This article argues that organizing inter-organizational collaboration in projects tends to be counterproductive, since the purpose of this collaboration is to increase the integration of local authorities. This article is based on case studies of three different collaboration projects. Each project is analyzed in relation to the way collaboration is organized within the project and how the relationship to the local authorities' activities is designed. The outcome of these studies shows that while collaboration projects increase integration between the responsible authorities, the integration stays within the projects. This is due to the fact that the projects were designed as units separate from the responsible authorities. As a result, the collaboration that occurs in the projects is not implemented in the local authorities' activities, and the viability of the increased integration of different responsible authorities does not extend beyond the projects. Copyright (c) 2009 John Wiley & Sons, Ltd.
Tesoro Los Angeles Refinery Integration and Compliance Project
EPA Region 9 has two announcements pertaining to the Los Angeles Refinery Integration and Compliance project (LARIC project): permit revisions meet all CAA requirements and federal PSD permitting provisions do not apply to this project.
NASA Astrophysics Data System (ADS)
Schöberl, Markus; Zabaras, Nicholas; Koutsourelakis, Phaedon-Stelios
2017-03-01
We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method [1] and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo - Expectation-Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.
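As a toy illustration of the coarse-to-fine idea (coarse variables acting as latent generators of fine data), the Python sketch below fits a one-factor linear-Gaussian model by expectation-maximization and then samples fine-scale reconstructions from the latent coarse variable. This is a drastic simplification: it omits the hierarchical sparsity prior, the fully Bayesian treatment of the parameters and the MC-EM machinery described in the paper.

import numpy as np

def fit_coarse_to_fine_em(X, n_iter=200, seed=0):
    # Toy directed model: fine data x = w * z + mu + noise, coarse latent z ~ N(0,1),
    # noise ~ N(0, s2 * I). EM for (w, mu, s2), i.e., one-factor probabilistic PCA.
    rng = np.random.default_rng(seed)
    N, D = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    w = rng.normal(size=D)
    s2 = Xc.var()
    for _ in range(n_iter):
        # E-step: posterior of the coarse latent given each fine sample.
        M = w @ w + s2
        Ez = Xc @ w / M                       # E[z_n]
        Ezz = s2 / M + Ez ** 2                # E[z_n^2]
        # M-step.
        w = (Xc.T @ Ez) / Ezz.sum()
        s2 = ((Xc ** 2).sum()
              - 2.0 * Ez @ (Xc @ w)
              + Ezz.sum() * (w @ w)) / (N * D)
    return w, mu, s2

def sample_fine(w, mu, s2, n=5, seed=1):
    # Predictive draws of fine configurations generated from coarse latents.
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, 1))
    return z * w + mu + rng.normal(scale=np.sqrt(s2), size=(n, len(mu)))

X = np.random.default_rng(2).normal(size=(500, 4)) @ np.diag([3, 1, 1, 1])
w, mu, s2 = fit_coarse_to_fine_em(X)
print(sample_fine(w, mu, s2).shape)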
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schöberl, Markus, E-mail: m.schoeberl@tum.de; Zabaras, Nicholas; Department of Aerospace and Mechanical Engineering, University of Notre Dame, 365 Fitzpatrick Hall, Notre Dame, IN 46556
We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo – Expectation–Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.
NASA Astrophysics Data System (ADS)
Katsoulakis, Markos A.; Vlachos, Dionisios G.
2003-11-01
We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by q^2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q^3 for short potentials to q^4 for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
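A heavily simplified flavor of coarse-grained lattice Monte Carlo is sketched below: each coarse cell lumps q x q microscopic sites into a single occupancy, and Metropolis moves exchange particles between neighboring cells under a toy nearest-neighbor energy. The rigorous coarse-grained rates, detailed-balance construction and error estimates of the paper are not reproduced; this is only an illustration of the data structure and move type.

import numpy as np

def cgmc_step(occ, q, beta=1.0, J=1.0, rng=np.random.default_rng()):
    # One Metropolis exchange move on a 1-D coarse lattice.
    # occ[k] = number of particles in coarse cell k (0 .. q*q microscopic sites).
    # Toy energy: nearest-neighbour attraction between coarse coverages.
    n_cells, cap = len(occ), q * q
    cov = occ / cap
    def energy(c):
        return -J * np.sum(c[:-1] * c[1:])
    k = rng.integers(n_cells - 1)
    d = rng.choice([-1, 1])                   # move one particle between k and k+1
    new = occ.copy()
    new[k] -= d
    new[k + 1] += d
    if new[k] < 0 or new[k] > cap or new[k + 1] < 0 or new[k + 1] > cap:
        return occ
    dE = energy(new / cap) - energy(cov)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        return new
    return occ

occ = np.full(32, 8)        # 32 coarse cells, q = 4, half filled
for _ in range(10000):
    occ = cgmc_step(occ, q=4)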
NASA Astrophysics Data System (ADS)
Lubecka, Emilia A.; Liwo, Adam
2017-09-01
Based on the theory of the construction of coarse-grained force fields for polymer chains described in our recent work [A. K. Sieradzan et al., J. Chem. Phys. 146, 124106 (2017)], in this work effective coarse-grained potentials, to be used in the SUGRES-1P model of polysaccharides that is being developed in our laboratory, have been determined for the O⋯O⋯O virtual-bond angles (θ) and for the dihedral angles for rotation about the O⋯O virtual bonds (γ) of 1→4-linked glucosyl polysaccharides, for all possible combinations of [α,β]-[d,l]-glucose. The potentials of mean force corresponding to the virtual-bond angles and the virtual-bond dihedral angles were calculated from the free-energy surfaces of [α,β]-[d,l]-glucose pairs, determined by umbrella-sampling molecular-dynamics simulations with the AMBER12 force field, or combinations of the surfaces of two pairs sharing the overlapping residue, respectively, by integrating the respective Boltzmann factor over the dihedral angles λ for the rotation of the sugar units about the O⋯O virtual bonds. Analytical expressions were subsequently fitted to the potentials of mean force. The virtual-bond-torsional potentials depend on both virtual-bond-dihedral angles and virtual-bond angles. The virtual-bond-angle potentials contain a single minimum at about θ = 140° for all pairs except β-d-[α,β]-l-glucose, where the global minimum is shifted to θ = 150° and a secondary minimum appears at θ = 90°. The torsional potentials favor small negative γ angles for the α-d-glucose and extended negative angles γ for the β-d-glucose chains, as observed in the experimental structures of starch and cellulose, respectively. It was also demonstrated that the approximate expression derived based on Kubo's cluster-cumulant theory, whose coefficients depend on the identity of the disugar units comprising a trisugar unit that defines a torsional potential, fits simultaneously all torsional potentials very well, thus reducing the number of parameters significantly.
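The Boltzmann-factor integration described above can be sketched numerically as F(θ) = -kT ln ∫ exp[-F(θ,λ)/kT] dλ on a grid. The Python example below uses a synthetic two-variable free-energy surface with an assumed minimum near θ = 140°; it stands in for, and does not reproduce, the AMBER12 umbrella-sampling surfaces used in the paper.

import numpy as np

def pmf_theta(F_grid, lam, kT=0.593):   # kT in kcal/mol near 298 K
    # F_grid[i, j] = free energy F(theta_i, lambda_j) on a regular grid.
    # PMF over theta: F(theta) = -kT * ln  integral of exp(-F(theta, lambda)/kT) d lambda
    boltz = np.exp(-F_grid / kT)
    Z_theta = np.trapz(boltz, lam, axis=1)
    F_theta = -kT * np.log(Z_theta)
    return F_theta - F_theta.min()       # shift so the minimum is zero

theta = np.linspace(60, 180, 121)        # degrees
lam = np.linspace(-180, 180, 181)
TH, LA = np.meshgrid(theta, lam, indexing="ij")
F_grid = 0.002 * (TH - 140.0) ** 2 + 1.5 * (1 + np.cos(np.radians(LA)))  # synthetic
print(theta[np.argmin(pmf_theta(F_grid, lam))])   # ~ 140, the assumed minimum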
The Desert Southwest Coarse Particulate Matter Study was undertaken of ambient concentrations and the composition of fine and coarse particles in rural, arid environments. Sampling was conducted in Pinal County, Arizona between February 2009 and February 2010. The goals of this ...
The Desert Southwest Coarse Particulate Matter Study was undertaken to further our understanding of the spatial and temporal variability and sources of fine and coarse particulate matter (PM) in rural, arid, desert environments. Sampling was conducted between February 2009 and Fe...
Experimental study on waves propagation over a coarse-grained sloping beach
NASA Astrophysics Data System (ADS)
Hsu, Tai-Wen; Lai, Jian-Wu
2013-04-01
This study investigates velocity fields of wave propagation over a coarse-grained sloping beach using laboratory experiments. The experiment was conducted in a wave flume 25 m long, 0.5 m wide and 0.6 m high, in which a coarse-grained 1:5 sloping beach was placed, built from two layers of glass balls. The glass balls have a diameter D = 7.9 cm and a center-to-center spacing of 8.0 cm. The test section for observing wave and flow fields is located at the middle part of the flume. A piston-type wave maker driven by an electromechanical hydraulic servo system is installed at the end of the flume. The intrinsic permeability Kp and turbulent drag coefficient Cf were obtained from steady-flow water-head experiments. The flow velocity was measured by particle image velocimetry (PIV) and digital image processing (DIP) techniques. Eleven fields of view (FOVs) were integrated into a complete representation including the outer, surf and swash zones. Details of the definition sketch of the coarse-grained sloping beach model as well as the experimental setup are given in Lai et al. (2008). A high-resolution CCD camera was used to capture the images and was calibrated by the direct linear transformation (DLT) algorithm proposed by Abdel-Aziz and Karara (1971). The air-water interface at each time step is detected by Otsu's (1978) threshold-selection algorithm. The comparison shows that the water surface elevation observed from the integrated images agrees well with the Otsu detection results. For the flow field measurement, each image pair was cross-correlated with a 32×32-pixel interrogation window and a half overlap between adjacent windows. Repeatability and synchronization are the key elements for both the wave motion and the PIV technique. The wave profiles and flow field were compared over several wave periods to ensure that they can be reproduced by the present system. The water depth is kept constant at h = 32 cm. The incident wave conditions are set to wave height H0 = 3.86 cm or 7.75 cm and wave period T = 1.0 s. The illumination source of the PIV system is a dual-head frequency-doubled Nd:YAG laser, which has a maximum energy output of 120 mJ per pulse at two wavelengths of 532 nm and 266 nm. A synchronizer controls the emission time of the pulsed laser beam as well as the camera exposure and shutter time. Linear wave theory (LWT) of wave propagation over a constant water depth was used to validate the DIP/PIV algorithm. The velocity profiles in the X and Z directions are in good agreement with those of LWT. Waves propagating over a coarse-grained sloping beach were then investigated using the PIV/DIP techniques. Detailed analysis of the experimental results shows that the flow field, turbulent intensity and vorticity are primarily located above the wave trough. A detailed description is provided in terms of free surface, velocity field, and turbulent energy transport. References: 1. Abdel-Aziz, Y. I. and Karara, H. M., 1971. Direct linear transformation into object space coordinates in close-range photogrammetry. In Proc. Symp. Close-Range Photogrammetry, 1-18. 2. Flow-3D (2008) user manual, version 9.3. 3. Otsu, N., 1978. A threshold selection method from gray-level histograms. IEEE Trans. on Systems, Man, and Cybernetics, 8, 62-66.
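Since Otsu's (1978) threshold selection is the step used above to detect the air-water interface, a minimal gray-level-histogram implementation is sketched below; the image array is a random stand-in, not PIV data from the experiment.

import numpy as np

def otsu_threshold(gray):
    # Otsu's method: pick the gray level that maximizes between-class variance.
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(256))       # class-0 cumulative mean
    mu_T = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in image
t = otsu_threshold(frame)
surface_mask = frame > t     # segmentation used to extract the free surface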
NASA Astrophysics Data System (ADS)
Screaton, E.; Kimura, G.; Curewitz, D.; Scientists, E.
2008-12-01
Integrated Ocean Drilling Program (IODP) Expedition 316 examined the frontal thrust and the shallow portion of the megasplay fault offshore of the Kii peninsula, and was the third drilling expedition of the Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE). NanTroSEIZE will integrate seafloor observations, drilling, and observatories to investigate the processes controlling slip along subduction zone plate boundary fault systems. Site C0004 examined a shallow portion of the splay fault system where it overrides slope basin sediments. Site C0008, located in the slope basin 1 km seaward of Site C0004, provided a reference site for the footwall sediments. Results of drilling indicate that the footwall sediments have dewatered significantly, suggesting permeable routes for fluid escape. These high-permeability pathways might be provided by coarse-grained layers within the slope sediments. In situ dewatering and multiple fluid escape paths will tend to obscure any geochemical signature of flow from depth. Sites C0006 and C0007 examined the frontal thrust system. Although poorly recovered, coarse-grained trench sediments were sampled within the footwall. These permeable sediments would be expected to allow rapid escape of any fluid pressures due to loading. At both sites, low porosities are observed at shallow depths, suggesting removal of overlying material. This observation is consistent with interpretations that the prism is unstable and currently in a period of collapse. Anomalously low temperatures were measured within boreholes at these sites. One possible explanation for the low temperatures is circulation of seawater along normal faults in the unstable prism.
Evaluation of Embedded System Component Utilized in Delivery Integrated Design Project Course
NASA Astrophysics Data System (ADS)
Junid, Syed Abdul Mutalib Al; Hussaini, Yusnira; Nazmie Osman, Fairul; Razak, Abdul Hadi Abdul; Idros, Mohd Faizul Md; Karimi Halim, Abdul
2018-03-01
This paper reports the evaluation of the embedded system component utilized in delivering the integrated electronic engineering design project course. The evaluation is conducted based on the project reports submitted to fulfil the assessment criteria for the integrated electronic engineering design project course, named engineering system design. Six projects were assessed in this evaluation. The evaluation covers the type of controller, the programming language and the number of embedded components utilized. From the evaluation, C-based programming is the solution preferred by the students because it provides them with flexibility in programming. Moreover, the analog-to-digital converter is used intensively in the projects that include sensors in their proposed designs. In conclusion, in delivering the integrated design project course, knowledge of embedded system solutions is very important given the density of knowledge that must be acquired in accomplishing the assigned project.
Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.
Li, Min; Zhang, John Zenghui; Xia, Fei
2016-04-12
Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley
2014-07-01
Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models has arisen. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself; that is, the characterization of the molecular architecture, the choice of interaction potentials and thus parameters, which best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework in this work by canonical ensembles involving only configurational energies. The all-atom model thus supplies data for Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and methods through applications to representative atomic structures and we discuss extensions to the validation process for molecular models of polymer structures encountered in certain semiconductor nanomanufacturing processes. The powerful method of model plausibility as a means for selecting interaction potentials for coarse-grained models is discussed in connection with a coarse-grained hexane molecule. Discussions of how all-atom information is used to construct priors are contained in an appendix.
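As a toy illustration of the calibration and model-selection ideas above, the Python sketch below calibrates a single interaction parameter against synthetic "all-atom" observables with a Gaussian likelihood and compares two candidate coarse-grained models by a grid-approximated log evidence, standing in for the notion of model plausibility. The potentials, priors and data are illustrative assumptions, not those of the paper.

import numpy as np

def log_likelihood(theta, model, data, sigma):
    # Gaussian approximation of the parameter-to-observation error.
    resid = data - model(theta)
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - data.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

def log_evidence(model, data, sigma, theta_grid, log_prior):
    # Grid approximation of ln integral p(data|theta) p(theta) dtheta ("plausibility").
    logs = np.array([log_likelihood(t, model, data, sigma) + log_prior(t)
                     for t in theta_grid])
    dtheta = theta_grid[1] - theta_grid[0]
    return np.log(np.trapz(np.exp(logs - logs.max()), dx=dtheta)) + logs.max()

# Synthetic "all-atom" observables and two candidate CG interaction models.
rng = np.random.default_rng(0)
r = np.linspace(1.0, 3.0, 30)
data = 4.0 * (r ** -12 - r ** -6) + rng.normal(scale=0.05, size=r.size)
model_lj = lambda eps: eps * (r ** -12 - r ** -6)           # candidate 1
model_exp = lambda eps: eps * np.exp(-r)                    # candidate 2
grid = np.linspace(0.1, 10.0, 400)
flat_prior = lambda t: -np.log(grid[-1] - grid[0])          # uniform on the grid
for name, m in [("LJ", model_lj), ("exp", model_exp)]:
    print(name, log_evidence(m, data, 0.05, grid, flat_prior))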
Local-scale projections of coral reef futures and implications of the Paris Agreement
NASA Astrophysics Data System (ADS)
van Hooidonk, Ruben; Maynard, Jeffrey; Tamelander, Jerker; Gove, Jamison; Ahmadia, Gabby; Raymundo, Laurie; Williams, Gareth; Heron, Scott F.; Planes, Serge
2016-12-01
Increasingly frequent severe coral bleaching is among the greatest threats to coral reefs posed by climate change. Global climate models (GCMs) project great spatial variation in the timing of annual severe bleaching (ASB) conditions; a point at which reefs are certain to change and recovery will be limited. However, previous model-resolution projections (~1 × 1°) are too coarse to inform conservation planning. To meet the need for higher-resolution projections, we generated statistically downscaled projections (4-km resolution) for all coral reefs; these projections reveal high local-scale variation in ASB. Timing of ASB varies >10 years in 71 of the 87 countries and territories with >500 km2 of reef area. Emissions scenario RCP4.5 represents lower emissions mid-century than will eventuate if pledges made following the 2015 Paris Climate Change Conference (COP21) become reality. These pledges do little to provide reefs with more time to adapt and acclimate prior to severe bleaching conditions occurring annually. RCP4.5 adds 11 years to the global average ASB timing when compared to RCP8.5; however, >75% of reefs still experience ASB before 2070 under RCP4.5. Coral reef futures clearly vary greatly among and within countries, indicating the projections warrant consideration in most reef areas during conservation and management planning.
Local-scale projections of coral reef futures and implications of the Paris Agreement.
van Hooidonk, Ruben; Maynard, Jeffrey; Tamelander, Jerker; Gove, Jamison; Ahmadia, Gabby; Raymundo, Laurie; Williams, Gareth; Heron, Scott F; Planes, Serge
2016-12-21
Increasingly frequent severe coral bleaching is among the greatest threats to coral reefs posed by climate change. Global climate models (GCMs) project great spatial variation in the timing of annual severe bleaching (ASB) conditions; a point at which reefs are certain to change and recovery will be limited. However, previous model-resolution projections (~1 × 1°) are too coarse to inform conservation planning. To meet the need for higher-resolution projections, we generated statistically downscaled projections (4-km resolution) for all coral reefs; these projections reveal high local-scale variation in ASB. Timing of ASB varies >10 years in 71 of the 87 countries and territories with >500 km2 of reef area. Emissions scenario RCP4.5 represents lower emissions mid-century than will eventuate if pledges made following the 2015 Paris Climate Change Conference (COP21) become reality. These pledges do little to provide reefs with more time to adapt and acclimate prior to severe bleaching conditions occurring annually. RCP4.5 adds 11 years to the global average ASB timing when compared to RCP8.5; however, >75% of reefs still experience ASB before 2070 under RCP4.5. Coral reef futures clearly vary greatly among and within countries, indicating the projections warrant consideration in most reef areas during conservation and management planning.
Collaborative Teaching and Learning through Multi-Institutional Integrated Group Projects
ERIC Educational Resources Information Center
Long, Suzanna K.; Carlo, Héctor J.
2013-01-01
This teaching brief describes an innovative multi-institutional initiative through which integrated student groups from different courses collaborate on a common course project. In this integrated group project, students are asked to design a decentralized manufacturing organization for a company that will manufacture industrial Proton-Exchange…
Well Elderly Integrated Training Project. Final Report.
ERIC Educational Resources Information Center
Summit-Portage Area Health Education Network, Akron, OH.
The Well Elderly Integrated Training Project was conceptualized as a service-oriented endeavor with an evaluation component. The project required that a university medical school resource faculty develop an integrated training program and materials on health education (wellness) for trainers who were respected, healthy elderly high in the senior…
77 FR 59932 - Single Source Award; Exception to Competition
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-01
... Primary Care Integration Project. In fiscal year (FY) 2012, $486,394 will be available to fully fund this..., effectively, and efficiently implement the CHW Behavioral Health Primary Care Integration Project within their... qualified to carry out the CHW Behavioral Health Primary Care Integration Project because of their...
Found in Translation: Interdisciplinary Arts Integration in Project AIM
ERIC Educational Resources Information Center
Pruitt, Lara; Ingram, Debra; Weiss, Cynthia
2014-01-01
This paper will share the arts-integration methodology used in Project AIM and address the question; "How is translation evident in interdisciplinary arts instruction, and how does it affect students?" Methods: The staff and researchers from Project AIM, (an arts-integration program of the Center for Community Arts Partnerships at…
NASA Astrophysics Data System (ADS)
Kenney, M. A.; Mohrig, D.; Hobbs, B. F.; Parker, G.
2011-12-01
Land loss in the Mississippi River Delta caused by subsidence and erosion has resulted in habitat loss, interference with human activities, and increased exposure of New Orleans and other settled areas to storm surge risks. Prior to dam and levee building and oil and gas production in the 20th century, the long-term rates of land building roughly balanced land loss through subsidence. Now, however, sediment is being deposited at dramatically lower rates in shallow areas in and adjacent to the Delta, with much of the remaining sediment borne by the Mississippi being lost to the deep areas of the Gulf of Mexico. A few projects have been built in order to divert sediment from the river to areas where land can be built, and many more are under consideration as part of State of Louisiana and Federal planning processes. Most are small scale, although there have been some proposals for large engineered avulsions that would divert a significant fraction of the remaining available sediment (W. Kim, et al. 2009, EOS). However, there is debate over whether small or large diversions represent the economically optimal and socially most acceptable size of such land building projects. From an economic point of view, the optimal size involves tradeoffs between scale economies in civil work construction, the relationship between depth of diversion and sediment concentration in river water, effects on navigation, and possible diminishing returns to land building at a single location as the edge of built land progresses into deeper waters. Because land building efforts could potentially involve billions of dollars of investment, it is important to gain as much benefit as possible from those expenditures. We present the results of a general analysis of scale economies in land building from engineered avulsions. The analysis addresses the question: how many projects of what size should be built at what time in order to maximize the amount of land built by a particular time? The analysis integrates three models: 1. coarse sediment diversion as a function of the width, depth, and timing of water diversions (using our field measurements of sediment concentration as a function of depth), 2. land building as a function of the location, water, and amount of sediment diverted, accounting for bathymetry, subsidence, and other factors, and 3. cost of building and operating the necessary civil works. Our statistical analysis of past diversions indicates the existence of economies of scale in width and diseconomies of scale in depth. The analysis explores general relationships between size, cost, and land building, and does not consider specific actual project proposals or locations. Sensitivity to assumptions about fine sediment capture, accumulation rates for organic material, and other inputs will be discussed.
NASA Astrophysics Data System (ADS)
Broich, Mark
Humid tropical forest cover loss is threatening the sustainability of ecosystem goods and services as vast forest areas are rapidly cleared for industrial scale agriculture and tree plantations. Despite the importance of humid tropical forest in the provision of ecosystem services and economic development opportunities, the spatial and temporal distribution of forest cover loss across large areas is not well quantified. Here I improve the quantification of humid tropical forest cover loss using two remote sensing-based methods: sampling and wall-to-wall mapping. In all of the presented studies, the integration of coarse spatial, high temporal resolution data with moderate spatial, low temporal resolution data enable advances in quantifying forest cover loss in the humid tropics. Imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used as the source of coarse spatial resolution, high temporal resolution data and imagery from the Landsat Enhanced Thematic Mapper Plus (ETM+) sensor are used as the source of moderate spatial, low temporal resolution data. In a first study, I compare the precision of different sampling designs for the Brazilian Amazon using the annual deforestation maps derived by the Brazilian Space Agency for reference. I show that sampling designs can provide reliable deforestation estimates; furthermore, sampling designs guided by MODIS data can provide more efficient estimates than the systematic design used for the United Nations Food and Agricultural Organization Forest Resource Assessment 2010. Sampling approaches, such as the one demonstrated, are viable in regions where data limitations, such as cloud contamination, limit exhaustive mapping methods. Cloud-contaminated regions experiencing high rates of change include Insular Southeast Asia, specifically Indonesia and Malaysia. Due to persistent cloud cover, forest cover loss in Indonesia has only been mapped at a 5-10 year interval using photo interpretation of single best Landsat images. Such an approach does not provide timely results, and cloud cover reduces the utility of map outputs. In a second study, I develop a method to exhaustively mine the recently opened Landsat archive for cloud-free observations and automatically map forest cover loss for Sumatra and Kalimantan for the 2000-2005 interval. In a comparison with a reference dataset consisting of 64 Landsat sample blocks, I show that my method, using per pixel time-series, provides more accurate forest cover loss maps for multiyear intervals than approaches using image composites. In a third study, I disaggregate Landsat-mapped forest cover loss, mapped over a multiyear interval, by year using annual forest cover loss maps generated from coarse spatial, high temporal resolution MODIS imagery. I further disaggregate and analyze forest cover loss by forest land use, and provinces. Forest cover loss trends show high spatial and temporal variability. These results underline the importance of annual mapping for the quantification of forest cover loss in Indonesia, specifically in the light of the developing Reducing Emissions from Deforestation and Forest Degradation in Developing Countries policy framework (REDD). All three studies highlight the advances in quantifying forest cover loss in the humid tropics made by integrating coarse spatial, high temporal resolution data with moderate spatial, low temporal resolution data. The three methods presented can be combined into an integrated monitoring strategy.
Lu, Dengsheng; Batistella, Mateus; Moran, Emilio
2009-01-01
Traditional change detection approaches have proven difficult for detecting vegetation changes in moist tropical regions with multitemporal images. This paper explores the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data for vegetation change detection in the Brazilian Amazon. A principal component analysis was used to integrate TM and HRG panchromatic data. Vegetation change/non-change was detected with the image differencing approach based on the TM and HRG fused image and the corresponding TM image. A rule-based approach was used to classify the TM and HRG multispectral images into thematic maps with three coarse land-cover classes: forest, non-forest vegetation, and non-vegetation lands. A hybrid approach combining image differencing and post-classification comparison was used to detect vegetation change trajectories. This research indicates promising vegetation change detection techniques, especially for vegetation gain and loss, even if very limited reference data are available. PMID:19789721
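A minimal sketch of the image-differencing step described above follows: the difference between two co-registered bands is thresholded at k standard deviations from the scene mean to flag change pixels. The PCA-based TM/HRG fusion and the rule-based classification are not reproduced, and the arrays are synthetic stand-ins.

import numpy as np

def difference_change_mask(band_t1, band_t2, k=2.0):
    # Image differencing: pixels whose difference deviates from the scene mean
    # by more than k standard deviations are flagged as vegetation change.
    diff = band_t2.astype(float) - band_t1.astype(float)
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) > k * sigma

# Synthetic stand-ins for a TM band and the fused band of a later date.
tm_band = np.random.default_rng(0).normal(100, 10, size=(200, 200))
fused_band = tm_band.copy()
fused_band[50:80, 50:80] -= 40          # a simulated patch of vegetation loss
change = difference_change_mask(tm_band, fused_band, k=2.0)
print(change.sum(), "pixels flagged as change")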
The SIMPSONS project: An integrated Mars transportation system
NASA Astrophysics Data System (ADS)
Kaplan, Matthew; Carlson, Eric; Bradfute, Sherie; Allen, Kent; Duvergne, Francois; Hernandez, Bert; Le, David; Nguyen, Quan; Thornhill, Brett
In response to the Request for Proposal (RFP) for an integrated transportation system network for an advanced Martian base, Frontier Transportation Systems (FTS) presents the results of the SIMPSONS project (Systems Integration for Mars Planetary Surface Operations Networks). The following topics are included: the project background, vehicle design, future work, conclusions, management status, and cost breakdown. The project focuses solely on the surface-to-surface transportation at an advanced Martian base.
The 17th Project Integration Meeting
NASA Technical Reports Server (NTRS)
Mcdonald, R. R.
1981-01-01
Progress made by the Low-Cost Solar Array Project during the period September 1980 to February 1981 is described. Included are reports on project analysis and integration; technology development in silicon material, large-area silicon sheet and encapsulation; production process and equipment development; engineering, and operations. A report on and copies of visual presentations made at the Project Integration Meeting held at Pasadena, California on February 4 and 5, 1981 are also included.
The SIMPSONS project: An integrated Mars transportation system
NASA Technical Reports Server (NTRS)
Kaplan, Matthew; Carlson, Eric; Bradfute, Sherie; Allen, Kent; Duvergne, Francois; Hernandez, Bert; Le, David; Nguyen, Quan; Thornhill, Brett
1992-01-01
In response to the Request for Proposal (RFP) for an integrated transportation system network for an advanced Martian base, Frontier Transportation Systems (FTS) presents the results of the SIMPSONS project (Systems Integration for Mars Planetary Surface Operations Networks). The following topics are included: the project background, vehicle design, future work, conclusions, management status, and cost breakdown. The project focuses solely on the surface-to-surface transportation at an advanced Martian base.
NASA Technical Reports Server (NTRS)
Johnson, Chuck; Griner, James H.; Hayhurst, Kelly J.; Shively, Robert J.; Consiglio, Maria; Muller, Eric; Murphy, James; Kim, Sam
2012-01-01
UAS Integration in the NAS Project overview with details from each of the subprojects. Subprojects include: Communications, Certification, Integrated Test and Evaluation, Human Systems Integration, and Separation Assurance/Sense and Avoid Interoperability.
NASA Astrophysics Data System (ADS)
Bari, Md. Aynul; MacNeill, Morgan; Kindzierski, Warren B.; Wallace, Lance; Héroux, Marie-Ève; Wheeler, Amanda J.
2014-08-01
Exposure to coarse particulate matter (PM), i.e., particles with an aerodynamic diameter between 2.5 and 10 μm (PM10-2.5), is of increasing interest due to the potential for health effects including asthma, allergy and respiratory symptoms. Limited information is available on indoor and outdoor coarse PM and associated endotoxin exposures. Seven consecutive 24-h samples of indoor and outdoor coarse PM were collected during winter and summer 2010 using Harvard Coarse Impactors in a total of 74 Edmonton homes where no reported smoking took place. Coarse PM filters were subsequently analyzed for endotoxin content. Data were also collected on indoor and outdoor temperature, relative humidity, air exchange rate, housing characteristics and occupants' activities. During winter, outdoor concentrations of coarse PM (median = 6.7 μg/m3, interquartile range, IQR = 3.4-12 μg/m3) were found to be higher than indoor concentrations (median 3.4 μg/m3, IQR = 1.6-5.7 μg/m3); while summer levels of indoor and outdoor concentrations were similar (median 4.5 μg/m3, IQR = 2.3-6.8 μg/m3, and median 4.7 μg/m3, IQR = 2.1-7.9 μg/m3, respectively). Similar predictors were identified for indoor coarse PM in both seasons and included corresponding outdoor coarse PM concentrations, whether vacuuming, sweeping or dusting was performed during the sampling period, and number of occupants in the home. Winter indoor coarse PM predictors also included the number of dogs and indoor endotoxin concentrations. Summer median endotoxin concentrations (indoor: 0.41 EU/m3, outdoor: 0.64 EU/m3) were 4-fold higher than winter concentrations (indoor: 0.12 EU/m3, outdoor: 0.16 EU/m3). Other than outdoor endotoxin concentrations, indoor endotoxin concentration predictors for both seasons were different. Winter endotoxin predictors also included presence of furry pets and whether the vacuum had a high efficiency particulate air (HEPA) filter. Summer endotoxin predictors were problems with mice in the previous 12 months and mean indoor relative humidity levels.
Integrated Risk Management Within NASA Programs/Projects
NASA Technical Reports Server (NTRS)
Connley, Warren; Rad, Adrian; Botzum, Stephen
2004-01-01
As NASA Project Risk Management activities continue to evolve, the need to successfully integrate risk management processes across the life cycle, between functional disciplines, stakeholders, various management policies, and within cost, schedule and performance requirements/constraints becomes more evident and important. Today's programs and projects are complex undertakings that include a myriad of processes, tools, techniques, management arrangements and other variables, all of which must function together in order to achieve mission success. The perception and impact of risk may vary significantly among stakeholders and may influence decisions that may have unintended consequences on the project during a future phase of the life cycle. In these cases, risks may be unintentionally and/or arbitrarily transferred to others without the benefit of a comprehensive systemic risk assessment. Integrating risk across people, processes, and project requirements/constraints serves to enhance decisions, strengthen communication pathways, and reinforce the ability of the project team to identify and manage risks across the broad spectrum of project management responsibilities. The ability to identify risks in all areas of project management increases the likelihood a project will identify significant issues before they become problems and allows projects to make effective and efficient use of shrinking resources. An integrated, total-team risk effort, a disciplined and rigorous process, and an understanding of project requirements/constraints together provide the opportunity for more effective risk management. Applying an integrated approach to risk management makes it possible to do a better job at balancing safety, cost, schedule, operational performance and other elements of risk. This paper will examine how people, processes, and project requirements/constraints can be integrated across the project life cycle for better risk management and ultimately improve the chances for mission success.
Metabolic Networks Integrative Cardiac Health Project (ICHP) - Center of Excellence
2016-08-01
Final report for the award titled "Metabolic Networks Integrative Cardiac Health Project (ICHP) - Center of Excellence," Principal Investigator COL (Ret) Marina N..., covering 29 Sep 2011 - 31 May 2016. The Integrative Cardiac Health Project (ICHP) aims to lead the way in Cardiovascular Disease (CVD) prevention by conducting novel research.
Low NOx Fuel Flexible Combustor Integration Project Overview
NASA Technical Reports Server (NTRS)
Walton, Joanne C.; Chang, Clarence T.; Lee, Chi-Ming; Kramer, Stephen
2015-01-01
The Integrated Technology Demonstration (ITD) 40A Low NOx Fuel Flexible Combustor Integration development is being conducted as part of the NASA Environmentally Responsible Aviation (ERA) Project. Phase 2 of this effort began in 2012 and will end in 2015. This document describes the ERA goals, how the fuel flexible combustor integration development fulfills the ERA combustor goals, and outlines the work to be conducted during project execution.
NASA Astrophysics Data System (ADS)
Fu, S.-P.; Peng, Z.; Yuan, H.; Kfoury, R.; Young, Y.-N.
2017-01-01
Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiency has been reported in cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4 ∼ 6 nm, so that there is only one particle in the thickness direction. Yuan et al. (2010) proposed a pair potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane, such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity. In this work we implement such an interaction potential in LAMMPS to simulate large-scale lipid systems such as a giant unilamellar vesicle (GUV) and red blood cells (RBCs). We also consider the effect of the cytoskeleton on the lipid membrane dynamics as a model for RBC dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. To demonstrate that the proposed methods do capture the observed dynamics of vesicles and RBCs, we focus on two sets of LAMMPS simulations: (1) vesicle shape transitions with varying enclosed volume; (2) RBC shape transitions with different enclosed volume. Finally, utilizing the parallel computing capability in LAMMPS, we provide timing results for parallel coarse-grained simulations to illustrate that it is possible to use LAMMPS to simulate large-scale realistic complex biological membranes for more than 1 ms.
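As a rough illustration of only the explicit-solvent part of such a setup, the sketch below drives LAMMPS from its Python wrapper to run a small box of coarse-grained particles interacting through a plain Lennard-Jones potential. This is a minimal sketch under stated assumptions: the lattice density, box size, reduced units and cutoff are placeholders, and the one-particle-thick membrane pair potential of Yuan et al. is not included, since its availability as a pair style depends on the LAMMPS build.

```python
# Minimal sketch: a box of coarse-grained Lennard-Jones "solvent" particles run
# through the LAMMPS Python wrapper. All values are placeholders; the membrane
# pair potential of Yuan et al. (2010) is NOT included.
from lammps import lammps

lmp = lammps()
for cmd in [
    "units lj",                       # reduced Lennard-Jones units
    "atom_style atomic",
    "lattice fcc 0.8442",             # standard L-J liquid density (placeholder)
    "region box block 0 10 0 10 0 10",
    "create_box 1 box",
    "create_atoms 1 box",             # fill the box with CG solvent particles
    "mass 1 1.0",
    "pair_style lj/cut 2.5",          # L-J interaction for the CG water
    "pair_coeff 1 1 1.0 1.0 2.5",
    "velocity all create 1.0 87287",
    "fix integrate all nve",
    "timestep 0.005",
    "run 1000",
]:
    lmp.command(cmd)
print("potential energy per atom:", lmp.get_thermo("pe") / lmp.get_natoms())
```

In a full membrane simulation the lipids would appear as a second atom type with the anisotropic one-particle-thick pair interaction, and the cytoskeleton as additional bonded coarse-grained beads.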
Parallel program debugging with flowback analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jongdeok.
1989-01-01
This thesis describes the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors. The goal of the debugging system is to present to the programmer a graphical view of the dynamic program dependences while keeping the execution-time overhead low. The author first describes the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. Execution-time overhead is kept low by recording only a small amount of trace during a program's execution. He uses semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, he uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. The cornerstone of the incremental tracing concept is to generate a coarse trace during execution and to fill incrementally, during the interactive portion of the debugging session, the gap between the information gathered in the coarse trace and the information needed to do the flowback analysis using the coarse trace. He then describes how to extend the flowback analysis to parallel programs. The flowback analysis can span process boundaries; i.e., the most recent modification to a shared variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program.
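As a toy illustration of the flowback idea (not the thesis's actual system), the sketch below records a trace of read/write events and answers a flowback query by walking backward from a read to the most recent write of the same shared variable, which may originate in a different process. The event format is invented for the example.

```python
# Toy illustration of a flowback query over a recorded trace (hypothetical format):
# each event is (step, process_id, op, variable). The query walks backward from a
# read to the most recent write of the same shared variable, which may come from
# a different process than the one issuing the read.
from typing import List, Optional, Tuple

Event = Tuple[int, int, str, str]  # (step, pid, "read"/"write", var)

def flowback(trace: List[Event], read_index: int) -> Optional[Event]:
    step, pid, op, var = trace[read_index]
    assert op == "read"
    for event in reversed(trace[:read_index]):
        if event[2] == "write" and event[3] == var:
            return event          # most recent modification, possibly in another process
    return None                   # variable was never written in the recorded trace

trace = [
    (1, 0, "write", "x"),
    (2, 1, "write", "x"),
    (3, 0, "read",  "x"),
]
print(flowback(trace, 2))  # -> (2, 1, 'write', 'x'): the write came from process 1
```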
Use of wastes derived from earthquakes for the production of concrete masonry partition wall blocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao Zhao; Faculty of Architecture, Civil Engineering and Environment Engineering and Mechanics, Sichuan University; Ling, Tung-Chai
2011-08-15
Highlights: > Solved the scientific and technological challenges impeding use of waste rubble derived from earthquakes, by providing an alternative solution of recycling the waste in moulded concrete block products. > Significant requirements for optimum integration of the waste aggregates in the production of concrete blocks are investigated. > A thorough understanding of the mechanical properties of concrete blocks made with waste derived from earthquakes is reported. - Abstract: Utilization of construction and demolition (C and D) wastes as recycled aggregates in the production of concrete and concrete products has attracted much attention in recent years. However, the presence of large quantities of crushed clay brick in some of the C and D waste streams (e.g. waste derived from collapsed masonry buildings after an earthquake) renders the recycled aggregates unsuitable for high grade use. One possibility is to make use of the low grade recycled aggregates for concrete block production. In this paper, we report the results of a comprehensive study to assess the feasibility of using crushed clay brick as coarse and fine aggregates in concrete masonry block production. The effects of the content of crushed coarse and fine clay brick aggregates (CBA) on the mechanical properties of non-structural concrete block were quantified. From the experimental test results, it was observed that incorporating the crushed clay brick aggregates had a significant influence on the properties of the blocks. The hardened density and drying shrinkage of the block specimens decreased with an increase in CBA content. The use of CBA increased the water absorption of block specimens. The results suggested that the amount of crushed clay brick to be used in concrete masonry blocks should be controlled at less than 25% for coarse aggregate and within 50-75% for fine aggregates.
Flinner, Nadine; Schleiff, Enrico
2015-01-01
Membranes are central for cells, as borders to the environment and for intracellular organelle definition. They are composed of and harbor different molecules, such as various lipid species and sterols, and they are generally crowded with proteins. The membrane system is very dynamic and components show lateral, rotational and translational diffusion. The consequence of the latter is that phase separation can occur in membranes in vivo and in vitro. It was documented that molecular dynamics simulations of an idealized plasma membrane model result in formation of membrane areas where either saturated lipids and cholesterol (liquid-ordered character, Lo) or unsaturated lipids (liquid-disordered character, Ld) were enriched. Furthermore, current discussions favor the idea that proteins are sorted into the liquid-disordered phase of model membranes, but experimental support for the behavior of isolated proteins in native membranes is sparse. To gain insight into the protein behavior we built a model of the red blood cell membrane with an integrated glycophorin A dimer. The sorting and the dynamics of the dimer were subsequently explored by coarse-grained molecular dynamics simulations. In addition, we inspected the impact of lipid head groups and the presence of cholesterol within the membrane on the dynamics of the dimer within the membrane. We observed that cholesterol is important for the formation of membrane areas with Lo and Ld character. Moreover, it is an important factor for the reproduction of the dynamic behavior of the protein found in its native environment. The protein dimer was exclusively sorted into the domain of Ld character in the model red blood cell plasma membrane. Therefore, we present structural information on the glycophorin A dimer distribution in the plasma membrane, in the absence of other factors such as lipid anchors, at coarse-grained resolution.
Root architecture impacts on root decomposition rates in switchgrass
NASA Astrophysics Data System (ADS)
de Graaff, M.; Schadt, C.; Garten, C. T.; Jastrow, J. D.; Phillips, J.; Wullschleger, S. D.
2010-12-01
Roots strongly contribute to soil organic carbon accrual, but the rate of soil carbon input via root litter decomposition is still uncertain. Root systems are built up of roots with a variety of different diameter size classes, ranging from very fine to very coarse roots. Since fine roots have low C:N ratios and coarse roots have high C:N ratios, root systems are heterogeneous in quality, spanning a range of different C:N ratios. Litter decomposition rates are generally well predicted by litter C:N ratios, thus decomposition of roots may be controlled by the relative abundance of fine versus coarse roots. With this study we asked how root architecture (i.e. the relative abundance of fine versus coarse roots) affects the decomposition of root systems in the biofuels crop switchgrass (Panicum virgatum L.). To understand how root architecture affects root decomposition rates, we collected roots from eight switchgrass cultivars (Alamo, Kanlow, Carthage, Cave-in-Rock, Forestburg, Southlow, Sunburst, Blackwell), grown at FermiLab (IL), by taking 4.8-cm diameter soil cores from on top of the crown and directly next to the crown of individual plants. Roots were carefully excised from the cores by washing and analyzed for root diameter size class distribution using WinRhizo. Subsequently, root systems of each of the plants (4 replicates per cultivar) were separated into 'fine' (0-0.5 mm), 'medium' (0.5-1 mm) and 'coarse' roots (1-2.5 mm), dried, cut into 0.5 cm (medium and coarse roots) and 2 mm pieces (fine roots), and incubated for 90 days. For each of the cultivars we established five root treatments: 20 g of soil was amended with 0.2 g of (1) fine roots, (2) medium roots, (3) coarse roots, (4) a 1:1:1 mixture of fine, medium and coarse roots, and (5) a mixture combining fine, medium and coarse roots in realistic proportions. We measured CO2 respiration at days 1, 3, 7, 15, 30, 60 and 90 during the experiment. The 13C signature of the soil was -26‰, and the 13C signature of the plants was -12‰, enabling us to differentiate between root-derived C and native SOM-C respiration. We found that the relative abundances of fine, medium and coarse roots were significantly different among cultivars. Root systems of Alamo, Kanlow and Cave-in-Rock were characterized by a large abundance of coarse relative to fine roots, whereas Carthage, Forestburg and Blackwell had a large abundance of fine relative to coarse roots. Fine roots had a 28% lower C:N ratio than medium and coarse roots. These differences led to different root decomposition rates. We conclude that root architecture should be taken into account when predicting root decomposition rates; enhanced understanding of the mechanisms of root decomposition will improve model predictions of C input to soil organic matter.
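The partitioning of respired CO2 into root-derived and soil-derived carbon from the two 13C end members quoted above follows a standard two-end-member mixing model; the sketch below shows the arithmetic with a made-up measured value, and is not necessarily the exact calculation used in the study.

```python
# Two-end-member 13C mixing sketch: partition measured CO2 respiration into
# root-derived and native soil-organic-matter (SOM) derived fractions.
# End members from the abstract: soil ~ -26 permil, plant/root ~ -12 permil.
# The measured delta13C of respired CO2 below is a made-up example value.
def root_derived_fraction(delta_sample, delta_soil=-26.0, delta_root=-12.0):
    """Fraction of respired C coming from root litter (simple linear mixing)."""
    return (delta_sample - delta_soil) / (delta_root - delta_soil)

delta_respired = -18.0          # hypothetical measured value (permil)
f_root = root_derived_fraction(delta_respired)
print(f"root-derived fraction: {f_root:.2f}, SOM-derived: {1 - f_root:.2f}")
```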
DEVELOPMENT AND EVALUATION OF A CONTINUOUS COARSE (PM10-PM2.5) PARTICLE MONITOR
In this paper, we describe the development and laboratory and field evaluation of a continuous coarse (2.5-10 um) particle mass (PM) monitor that can provide reliable measurements of the coarse mass (CM) concentrations in time intervals as short as 5-10 min. The operating princ...
USDA-ARS?s Scientific Manuscript database
We compared short-term effects of lug-soled boot trampling disturbance on water infiltration and soil erodibility on coarse-textured soils covered by a mixture of fine gravel and coarse sand over weak cyanobacterially-dominated biological soil crusts. Trampling significantly reduced final infiltrati...
Project management techniques for highly integrated programs
NASA Technical Reports Server (NTRS)
Stewart, J. F.; Bauer, C. A.
1983-01-01
The management and control of a representative, highly integrated high-technology project, the X-29A aircraft flight test project, is addressed. The X-29A research aircraft required the development and integration of eight distinct technologies in one aircraft. The project management system developed for the X-29A flight test program focuses on the dynamic interactions and the intercommunication among components of the system. The insights gained from the new conceptual framework permitted subordination of departments to more functional units of decisionmaking, information processing, and communication networks. These processes were used to develop a project management system for the X-29A around the information flows that minimized the effects inherent in sampled-data systems and exploited the closed-loop multivariable nature of highly integrated projects.
NASA Astrophysics Data System (ADS)
Domack, Eugene W.; Taviani, Marco; Rodriguez, Anthonio
1999-11-01
Coarse, bioclastic-rich sands have been widely reported from the banks of the Antarctic continental shelf, but their origin is still poorly known. We report on a suite of coarse sediments recovered from the top of the Mawson Bank in the northwestern Ross Sea. Radiocarbon ages of biogenic calcite, for modern and apparently late Pleistocene deposits, range from 1085±45 to 20,895±250 yr B.P. Discovery of soft tissue (Ascidian) preserved as an incrustation on a pebble at 2 m depth indicates aggregation of the sediment within several months or a year of core recovery. Radiocarbon ages of acid insoluble organic matter (aiom) are less than those of the foraminifera calcite. The aiom ages are also reversed in sequence, indicating reworking of the sediment during deposition. These observations and a review of recently published literature suggest that much of the bank top sediment in Antarctica is presently undergoing remobilization, under the influence of strong currents and/or icebergs, even under interglacial (high-stand) sea levels. These observations point out the need for careful, integrated studies on high latitude marine sediment cores before resultant "ages" alone are used as the foundation for paleoglacial reconstructions.
Wilczynski, Bartek; Furlong, Eileen E M
2010-04-15
Development is regulated by dynamic patterns of gene expression, which are orchestrated through the action of complex gene regulatory networks (GRNs). Substantial progress has been made in modeling transcriptional regulation in recent years, ranging from qualitative "coarse-grain" models operating at the gene level to very "fine-grain" quantitative models operating at the biophysical transcription factor-DNA level. Recent advances in genome-wide studies have revealed an enormous increase in the size and complexity of GRNs. Even relatively simple developmental processes can involve hundreds of regulatory molecules, with extensive interconnectivity and cooperative regulation. This leads to an explosion in the number of regulatory functions, effectively impeding Boolean-based qualitative modeling approaches. At the same time, the lack of information on the biophysical properties of the majority of transcription factors within a global network restricts quantitative approaches. In this review, we explore the current challenges in moving from modeling medium-scale, well-characterized networks to more poorly characterized global networks. We suggest integrating coarse- and fine-grain approaches to model gene regulatory networks in cis. We focus on two very well-studied examples from Drosophila, which likely represent typical developmental regulatory modules across metazoans. Copyright (c) 2009 Elsevier Inc. All rights reserved.
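For readers unfamiliar with the "coarse-grain" qualitative models referred to here, the following toy Boolean network shows the gene-level abstraction: each gene is ON or OFF and is updated by a logical rule over its regulators. The genes and rules are invented purely for illustration; real developmental GRNs involve hundreds of such rules, which is exactly the combinatorial explosion the review discusses.

```python
# Toy Boolean ("coarse-grain") gene regulatory network: each gene is ON/OFF and
# updated synchronously by a Boolean rule over its regulators. Genes and rules
# here are invented purely for illustration.
rules = {
    "A": lambda s: s["C"],                 # A is activated by C
    "B": lambda s: s["A"] and not s["C"],  # B needs A and is repressed by C
    "C": lambda s: not s["B"],             # C is repressed by B
}

def step(state):
    """One synchronous update of all genes."""
    return {gene: bool(rule(state)) for gene, rule in rules.items()}

state = {"A": True, "B": False, "C": False}
for t in range(6):
    print(t, state)
    state = step(state)
```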
Granito, Vito Mario; Lunghini, Dario; Maggi, Oriana; Persiani, Anna Maria
2015-01-01
The authors conducted an ecological study of forests subjected to varying management. The aim of the study was to extend and integrate, within a multivariate context, knowledge of how saproxylic fungal communities behave along altitudinal/vegetational gradients in response to the varying features and quality of coarse woody debris (CWD). The intra-annual seasonal monitoring of saproxylic fungi, based on sporocarp inventories, was used to investigate saproxylic fungi in relation to vegetation types and management categories. We analyzed fungal species occurrence, recorded according to the presence/absence and frequency of sporocarps, on the basis of the harvest season, of coarse woody debris decay classes as well as other environmental and ecological variables. Two-way cluster analysis, DCA and Spearman's rank correlations, for indirect gradient analysis, were performed to identify any patterns of seasonality and decay. Most of the species were found on CWD in an intermediate decay stage. The first DCA axis revealed the vegetational/microclimate gradient as the main driver of fungal community composition, while the second axis corresponded to a strong gradient of CWD decay classes. © 2015 by The Mycological Society of America.
A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.
Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing
2016-09-23
In this paper, an accumulation technique suitable for digital domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the rate of imaging. Exploiting the slight variations of quantization codes among different pixel exposures towards the same object, the pixel array is divided into two groups: one is for coarse quantization of the high bits only, and the other is for fine quantization of the low bits. Then, the complete quantization codes are composed from the results of both the coarse and the fine quantization. The equivalent operation comparably reduces the total required bit number of the quantization. In the 0.18 µm CMOS process, two versions of 16-stage digital domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumption of slices of the two versions is 6.47 × 10^-8 J/line and 7.4 × 10^-8 J/line, respectively. Meanwhile, the linearities of the two versions are 99.74% and 99.99%, respectively.
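The coarse-and-fine split can be pictured as partitioning each 10-bit code into high and low bit fields that are quantized by different pixel groups and then reassembled. The sketch below shows that bookkeeping only; the 5/5 bit split and the sample code are assumptions, not the paper's exact scheme.

```python
# Sketch of the coarse-and-fine digital accumulation idea: one pixel group
# quantizes only the high bits ("coarse"), the other only the low bits ("fine"),
# and the full 10-bit code is reassembled from the two partial results.
# Bit split and values are placeholders, not the paper's exact scheme.
COARSE_BITS = 5          # high bits quantized by the coarse group (assumption)
FINE_BITS = 5            # low bits quantized by the fine group (assumption)

def split_code(code10):
    coarse = code10 >> FINE_BITS             # high bits only
    fine = code10 & ((1 << FINE_BITS) - 1)   # low bits only
    return coarse, fine

def recombine(coarse, fine):
    return (coarse << FINE_BITS) | fine

sample = 0b1011001101                        # example 10-bit ADC code (717)
c, f = split_code(sample)
assert recombine(c, f) == sample
print(f"code={sample}, coarse={c:05b}, fine={f:05b}")
```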
Leyk, Stefan; Runfola, Dan; Nawrotzki, Raphael J; Hunter, Lori M; Riosmena, Fernando
2017-08-01
Migration provides a strategy for rural Mexican households to cope with, or adapt to, weather events and climatic variability. Yet prior studies on "environmental migration" in this context have not examined the differences between choices of internal (domestic) or international movement. In addition, much of the prior work relied on very coarse spatial scales to operationalize the environmental variables such as rainfall patterns. To overcome these limitations, we use fine-grain rainfall estimates derived from NASA's Tropical Rainfall Measuring Mission (TRMM) satellite. The rainfall estimates are combined with Population and Agricultural Census information to examine associations between environmental changes and municipal rates of internal and international migration 2005-2010. Our findings suggest that municipal-level rainfall deficits relative to historical levels are an important predictor of both international and internal migration, especially in areas dependent on seasonal rainfall for crop productivity. Although our findings do not contradict results of prior studies using coarse spatial resolution, they offer clearer results and a more spatially nuanced examination of migration as related to social and environmental vulnerability and thus higher degrees of confidence.
von Sperling, M
2015-01-01
This paper presents a comparison between three simple sewage treatment lines involving natural processes: (a) upflow anaerobic sludge blanket (UASB) reactor-three maturation ponds in series-coarse rock filter; (b) UASB reactor-horizontal subsurface-flow constructed wetland; and (c) vertical-flow constructed wetlands treating raw sewage (first stage of the French system). The evaluation was based on several years of practical experience with three small full-scale plants receiving the same influent wastewater (population equivalents of 220, 60 and 100 inhabitants) in the city of Belo Horizonte, Brazil. The comparison included interpretation of concentrations and removal efficiencies based on monitoring data (organic matter, solids, nitrogen, phosphorus, coliforms and helminth eggs), together with an evaluation of practical aspects, such as land and volume requirements, sludge production and handling, plant management, clogging and others. Based on an integrated evaluation of all aspects involved, it is worth emphasizing that each system has its own specificities, and no generalization can be made on the best option. The overall conclusion is that the three lines are suitable for sewage treatment in small communities in warm-climate regions.
Code of Federal Regulations, 2013 CFR
2013-07-01
Education Regulations, Demonstration Projects for the Integration of Vocational and Academic Learning Program, General - § 425.1 What is the Demonstration Projects for the Integration of Vocational and Academic Learning Program?
Code of Federal Regulations, 2011 CFR
2011-07-01
Education Regulations, Demonstration Projects for the Integration of Vocational and Academic Learning Program, General - § 425.1 What is the Demonstration Projects for the Integration of Vocational and Academic Learning Program?
Code of Federal Regulations, 2012 CFR
2012-07-01
Education Regulations, Demonstration Projects for the Integration of Vocational and Academic Learning Program, General - § 425.1 What is the Demonstration Projects for the Integration of Vocational and Academic Learning Program?
Code of Federal Regulations, 2014 CFR
2014-07-01
Education Regulations, Demonstration Projects for the Integration of Vocational and Academic Learning Program, General - § 425.1 What is the Demonstration Projects for the Integration of Vocational and Academic Learning Program?
Downscaled projections of Caribbean coral bleaching that can inform conservation planning.
van Hooidonk, Ruben; Maynard, Jeffrey Allen; Liu, Yanyun; Lee, Sang-Ki
2015-09-01
Projections of climate change impacts on coral reefs produced at the coarse resolution (~1°) of Global Climate Models (GCMs) have informed debate but have not helped target local management actions. Here, projections of the onset of annual coral bleaching conditions in the Caribbean under Representative Concentration Pathway (RCP) 8.5 are produced using an ensemble of 33 Coupled Model Intercomparison Project phase-5 models and via dynamical and statistical downscaling. A high-resolution (~11 km) regional ocean model (MOM4.1) is used for the dynamical downscaling. For statistical downscaling, sea surface temperature (SST) means and annual cycles in all the GCMs are replaced with observed data from the ~4-km NOAA Pathfinder SST dataset. Spatial patterns in all three projections are broadly similar; the average year for the onset of annual severe bleaching is 2040-2043 for all projections. However, downscaled projections show many locations where the onset of annual severe bleaching (ASB) varies 10 or more years within a single GCM grid cell. Managers in locations where this applies (e.g., Florida, Turks and Caicos, Puerto Rico, and the Dominican Republic, among others) can identify locations that represent relative albeit temporary refugia. Both downscaled projections are different for the Bahamas compared to the GCM projections. The dynamically downscaled projections suggest an earlier onset of ASB linked to projected changes in regional currents, a feature not resolved in GCMs. This result demonstrates the value of dynamical downscaling for this application and means statistically downscaled projections have to be interpreted with caution. However, aside from west of Andros Island, the projections for the two types of downscaling are mostly aligned; projected onset of ASB is within ±10 years for 72% of the reef locations. © 2015 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
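A schematic of the statistical downscaling step described here, in which a GCM's own SST mean and annual cycle are replaced by the observed (e.g., Pathfinder) climatology while the GCM anomalies and trend are retained, is sketched below. Array sizes and values are invented stand-ins for gridded data.

```python
# Schematic of the statistical downscaling step described in the abstract:
# remove each GCM's own SST mean/annual cycle and replace it with the observed
# (e.g. Pathfinder) climatology, keeping only the GCM anomaly/trend.
# Shapes and data here are invented; real use would read gridded files.
import numpy as np

months_per_year = 12

def statistically_downscale(gcm_sst, obs_clim):
    """gcm_sst: (n_months,) monthly GCM SST at one reef location.
    obs_clim: (12,) observed mean annual cycle at the (finer) target location."""
    n_years = gcm_sst.size // months_per_year
    gcm_clim = gcm_sst.reshape(n_years, months_per_year).mean(axis=0)  # GCM annual cycle
    anomalies = gcm_sst - np.tile(gcm_clim, n_years)                   # GCM variability + trend
    return np.tile(obs_clim, n_years) + anomalies                      # observed cycle + GCM anomaly

rng = np.random.default_rng(0)
gcm = 27 + 2 * np.sin(np.arange(240) * 2 * np.pi / 12) + 0.01 * np.arange(240) + rng.normal(0, 0.3, 240)
obs = 28 + 1.5 * np.sin(np.arange(12) * 2 * np.pi / 12)
print(statistically_downscale(gcm, obs)[:12].round(2))
```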
The integration of FPGA TDC inside White Rabbit node
NASA Astrophysics Data System (ADS)
Li, H.; Xue, T.; Gong, G.; Li, J.
2017-04-01
White Rabbit technology is capable of delivering sub-nanosecond accuracy and picosecond precision of synchronization, together with normal data packets, over a fiber network. The carry chain structure in an FPGA is a popular way to build a TDC, and tens-of-picoseconds RMS resolution has been achieved. The integration of WR technology with an FPGA TDC can enhance and simplify the TDC in many aspects: it provides a low-jitter clock for the TDC, a synchronized absolute UTC/TAI timestamp for the coarse counter, a convenient way to calibrate the carry chain DNL, and an easy-to-use Ethernet link for data and control information transmission. This paper presents an FPGA TDC implemented inside a normal White Rabbit node with sub-nanosecond measurement precision. The measured standard deviation reaches 50 ps between two distributed TDCs. Possible applications of this distributed TDC are also discussed.
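Conceptually, each event timestamp combines the White-Rabbit-disciplined UTC/TAI second and coarse clock count with the carry-chain fine interpolation inside one clock cycle. The sketch below shows that composition; the clock period, bin width and sign convention for the fine term are assumptions, and a real design calibrates the bin widths (DNL) on-line.

```python
# Sketch of composing a TDC timestamp from a White-Rabbit-disciplined coarse
# counter plus a carry-chain fine interpolation. Clock period and bin width
# are placeholders; a real design calibrates the fine bins (DNL) on-line.
CLOCK_PERIOD_NS = 8.0         # e.g. 125 MHz system clock (assumption)
FINE_BIN_PS = 10.0            # calibrated average carry-chain bin width (assumption)

def timestamp_ns(utc_seconds, coarse_count, fine_bins):
    """Absolute event time in ns: seconds plus coarse cycles minus fine interpolation."""
    fine_ns = fine_bins * FINE_BIN_PS / 1000.0   # interpolate inside one clock cycle
    return utc_seconds * 1e9 + coarse_count * CLOCK_PERIOD_NS - fine_ns

# Two distributed TDCs timestamping the same pulse:
t1 = timestamp_ns(utc_seconds=100, coarse_count=1250, fine_bins=37)
t2 = timestamp_ns(utc_seconds=100, coarse_count=1250, fine_bins=42)
print(f"skew between nodes: {abs(t1 - t2) * 1000:.0f} ps")
```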
Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism
NASA Astrophysics Data System (ADS)
Parish, Eric; Duraisamy, Karthik
2017-11-01
The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project ''LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
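For reference, the M-Z reformulation referred to above is usually written as the following exact identity (the notation is the generic one from the optimal-prediction literature, not necessarily the authors'), in which the memory term is the convolution integral that the closure models approximate:

```latex
% Standard Mori-Zwanzig decomposition of the resolved dynamics (generic notation):
% P projects onto the resolved subspace, Q = I - P, and L is the Liouville operator.
\frac{\partial}{\partial t}\, e^{tL}\phi_0
  = \underbrace{e^{tL} P L \phi_0}_{\text{Markovian (resolved)}}
  + \underbrace{\int_0^t e^{(t-s)L}\, P L\, e^{sQL} Q L \phi_0 \, ds}_{\text{memory (convolution integral)}}
  + \underbrace{e^{tQL} Q L \phi_0}_{\text{noise (unresolved)}}
```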
NASA Astrophysics Data System (ADS)
Wouters, Hendrik; De Ridder, Koen; Poelmans, Lien; Willems, Patrick; Brouwers, Johan; Hosseinzadehtalaei, Parisa; Tabari, Hossein; Vanden Broucke, Sam; van Lipzig, Nicole P. M.; Demuzere, Matthias
2017-09-01
Urban areas are usually warmer than their surrounding natural areas, an effect known as the urban heat island effect. As such, they are particularly vulnerable to global warming and associated increases in extreme temperatures. Yet ensemble climate-model projections are generally performed on a scale that is too coarse to represent the evolution of temperatures in cities. Here, for the first time, we combine unprecedented long-term (35 years) urban climate model integrations at the convection-permitting scale (2.8 km resolution) with information from an ensemble of general circulation models to assess temperature-based heat stress for Belgium, a densely populated midlatitude maritime region. We discover that the heat stress increase toward the mid-21st century is twice as large in cities compared to their surrounding rural areas. The exacerbation is driven by the urban heat island itself, its concurrence with heat waves, and urban expansion. Cities experience a heat stress multiplication by a factor of between 1.4 and 15, depending on the scenario. Remarkably, future heat stress everywhere surpasses that of today's urban hot spots. Our results demonstrate the need to combine information from climate models acting on different scales for climate change risk assessment in heterogeneous regions. Moreover, these results highlight the necessity for adaptation to increasing heat stress, especially in urban areas.
Satellite-Enhanced Dynamical Downscaling of Extreme Events
NASA Astrophysics Data System (ADS)
Nunes, A.
2015-12-01
Severe weather events can be the triggers of environmental disasters in regions particularly susceptible to changes in hydrometeorological conditions. In that regard, the reconstruction of past extreme weather events can help in the assessment of vulnerability and risk mitigation actions. Using novel modeling approaches, dynamical downscaling of long-term integrations from global circulation models can be useful for risk analysis, providing more accurate climate information at regional scales. Originally developed at the National Centers for Environmental Prediction (NCEP), the Regional Spectral Model (RSM) is being used in the dynamical downscaling of global reanalysis, within the South American Hydroclimate Reconstruction Project. Here, RSM combines scale-selective bias correction with assimilation of satellite-based precipitation estimates to downscale extreme weather occurrences. Scale-selective bias correction is a method employed in the downscaling, similar to the spectral nudging technique, in which the downscaled solution develops in agreement with its coarse boundaries. Precipitation assimilation acts on modeled deep-convection, drives the land-surface variables, and therefore the hydrological cycle. During the downscaling of extreme events that took place in Brazil in recent years, RSM continuously assimilated NCEP Climate Prediction Center morphing technique precipitation rates. As a result, RSM performed better than its global (reanalysis) forcing, showing more consistent hydrometeorological fields compared with more sophisticated global reanalyses. Ultimately, RSM analyses might provide better-quality initial conditions for high-resolution numerical predictions in metropolitan areas, leading to more reliable short-term forecasting of severe local storms.
Chen, Xuehui; Sun, Yunxiang; An, Xiongbo; Ming, Dengming
2011-10-14
Normal mode analysis of large biomolecular complexes at atomic resolution remains challenging in computational structural biology due to the requirement of large amounts of memory and central processing unit time. In this paper, we present a method called the virtual interface substructure synthesis method (VISSM) to calculate approximate normal modes of large biomolecular complexes at atomic resolution. VISSM introduces the subunit interfaces as independent substructures that join contacting molecules so as to keep the integrity of the system. Compared with other approximate methods, VISSM delivers atomic modes with no need of a coarse-graining-then-projection procedure. The method was examined for 54 protein complexes against conventional all-atom normal mode analysis using the CHARMM simulation program, and the overlap of the first 100 low-frequency modes is greater than 0.7 for 49 complexes, indicating its accuracy and reliability. We then applied VISSM to the satellite panicum mosaic virus (SPMV, 78,300 atoms) and to F-actin filament structures of up to 39-mer, 228,813 atoms, and found that VISSM calculations capture functionally important conformational changes accessible to these structures at atomic resolution. Our results support the idea that the dynamics of a large biomolecular complex might be understood based on the motions of its component subunits and the way in which subunits bind one another. © 2011 American Institute of Physics.
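The mode-overlap figure quoted above is commonly computed as the absolute cosine (normalized dot product) between an approximate eigenvector and the corresponding reference all-atom eigenvector. The sketch below shows that comparison with random stand-in vectors; it illustrates the metric only, not the paper's code.

```python
# Sketch of the mode-overlap measure used to compare approximate normal modes
# (e.g. VISSM) with reference all-atom modes: the absolute dot product between
# normalized eigenvectors. Random vectors stand in for real modes.
import numpy as np

def mode_overlap(mode_a, mode_b):
    """|cosine| between two normal-mode eigenvectors of equal length (3N)."""
    a = mode_a / np.linalg.norm(mode_a)
    b = mode_b / np.linalg.norm(mode_b)
    return abs(np.dot(a, b))

rng = np.random.default_rng(1)
n_atoms = 500
reference = rng.normal(size=3 * n_atoms)
approx = reference + rng.normal(scale=0.5, size=3 * n_atoms)  # perturbed "approximate" mode
print(f"overlap = {mode_overlap(reference, approx):.2f}")     # values > 0.7 indicate good agreement
```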
Ecology and space: A case study in mapping harmful invasive species
David T. Barnett,; Jarnevich, Catherine S.; Chong, Geneva W.; Stohlgren, Thomas J.; Sunil Kumar,; Holcombe, Tracy R.; Brunn, Stanley D.; Dodge, Martin
2017-01-01
The establishment and invasion of non-native plant species have the ability to alter the composition of native species and functioning of ecological systems with financial costs resulting from mitigation and loss of ecological services. Spatially documenting invasions has applications for management and theory, but the utility of maps is challenged by availability and uncertainty of data, and the reliability of extrapolating mapped data in time and space. The extent and resolution of projections also impact the ability to inform invasive species science and management. Early invasive species maps were coarse-grained representations that underscored the phenomena, but had limited capacity to direct management aside from development of watch lists for priorities for prevention and containment. Integrating mapped data sets with fine-resolution environmental variables in the context of species-distribution models allows a description of species-environment relationships and an understanding of how, why, and where invasions may occur. As with maps, the extent and resolution of models impact the resulting insight. Models of cheatgrass (Bromus tectorum) across a variety of spatial scales and grain result in divergent species-environment relationships. New data can improve models and efficiently direct further inventories. Mapping can target areas of greater model uncertainty or the bounds of modeled distribution to efficiently refine models and maps. This iterative process results in dynamic, living maps capable of describing the ongoing process of species invasions.
Beck, Tove K; Jensen, Sidsel; Simmelsgaard, Sonni Hansen; Kjeldsen, Chris; Kidmose, Ulla
2015-08-01
Vegetable intake seems to play a protective role against major lifestyle diseases. Despite this, the Danish population usually eats far less than the recommended daily intake. The present study focused on the intake of 17 coarse vegetables and the potential barriers limiting their intake. The present study drew upon a large Danish survey (n = 1079) to study the intake of coarse vegetables among Danish consumers. Four population clusters were identified based on their intake of 17 different coarse vegetables, and profiled according to hedonics, socio-demographic, health, and food lifestyle factors. The four clusters were characterized by a very low intake frequency of coarse vegetables ('low frequency'), a low intake frequency of coarse vegetables; but high intake frequency of carrots ('carrot eaters'), a moderate coarse vegetable intake frequency and high intake frequency of beetroot ('beetroot eaters'), and a high intake frequency of all coarse vegetables ('high frequency'). There was a relationship between reported liking and reported intake frequency for all tested vegetables. Preference for foods with a sweet, salty or bitter taste, in general, was also identified to be decisive for the reported vegetable intake, as these differed across the clusters. Each cluster had distinct socio-demographic, health and food lifestyle profiles. 'Low frequency' was characterized by uninvolved consumers with lack of interest in food, 'carrot eaters' vegetable intake was driven by health aspects, 'beetroot eaters' were characterized as traditional food consumers, and 'high frequency' were individuals with a strong food engagement and high vegetable liking. 'Low frequency' identified more barriers than other consumer clusters and specifically regarded low availability of pre-cut/prepared coarse vegetables on the market as a barrier. Across all clusters a low culinary knowledge was identified as the main barrier. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Skilton, Paul F.; Forsyth, David; White, Otis J.
2008-01-01
Building from research on learning in workplace project teams, the authors work forward from the idea that the principal condition enabling integration learning in student team projects is project complexity. Recognizing the challenges of developing and running complex student projects, the authors extend theory to propose that the experience of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongil Chun; Dohyeon Kim; Kwangyong Eun
TiC-Ni-Mo cermet specimens were prepared by using a mixture of fine (1.5 µm) and coarse (30 µm) TiC powders. When the fraction of fine TiC particles was 80%, a (Ti,Mo,Ni)C complex carbide phase was observed to deposit on the coarse TiC particles, resulting in a typical cored structure. As the fraction of fine TiC particles decreased, the coarse TiC particles exhibited a unique microstructural evolution with the development of a concave interface. This microstructural change of the coarse TiC grains can be explained in terms of the coherency strain energy.
Characterization of coarse particulate matter in school gyms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branis, Martin, E-mail: branis@natur.cuni.cz; Safranek, Jiri
2011-05-15
We investigated the mass concentration, mineral composition and morphology of particles resuspended by children during scheduled physical education in urban, suburban and rural elementary school gyms in Prague (Czech Republic). Cascade impactors were deployed to sample the particulate matter. Two fractions of coarse particulate matter (PM10-2.5 and PM2.5-1.0) were characterized by gravimetry, energy dispersive X-ray spectrometry and scanning electron microscopy. Two indicators of human activity, the number of exercising children and the number of physical education hours, were also recorded. Lower mass concentrations of coarse particulate matter were recorded outdoors (average PM10-2.5 4.1-7.4 μg/m3 and PM2.5-1.0 2.0-3.3 μg/m3) than indoors (average PM10-2.5 13.6-26.7 μg/m3 and PM2.5-1.0 3.7-7.4 μg/m3). The indoor concentrations of coarse aerosol were elevated during days with scheduled physical education, with an average indoor-outdoor (I/O) ratio of 2.5-16.3 for PM10-2.5 and 1.4-4.8 for PM2.5-1.0. Under extreme conditions, the I/O ratios reached 180 (PM10-2.5) and 19.1 (PM2.5-1.0). Multiple regression analysis based on the number of students and outdoor coarse PM as independent variables showed that the main predictor of indoor coarse PM concentrations is the number of students in the gym. The effect of outdoor coarse PM was weak and inconsistent. The regression models for the three schools explained 60-70% of the variability of the particular dataset. X-ray spectrometry revealed 6 main groups of minerals contributing to resuspended indoor dust. The most abundant particles were those of crustal origin composed of Si, Al, O and Ca. Scanning electron microscopy showed that, in addition to numerous inorganic particles, various types of fibers and particularly skin scales make up the main part of the resuspended dust in the gyms. In conclusion, school gyms were found to be indoor microenvironments with high concentrations of coarse particulate matter, which can contribute to increased short-term inhalation exposure of exercising children. - Highlights: We studied the concentration, composition and morphology of coarse particles in gyms. Indoor concentrations of coarse particles were high during days with pupil activity. The effect of outdoor coarse dust on indoor levels was weak and inconsistent. Six main groups of minerals contributing to indoor resuspended dust were determined. The most abundant coarse particles were human skin scales.
NASA UAS Integration into the NAS Project: Human Systems Integration
NASA Technical Reports Server (NTRS)
Shively, Jay
2016-01-01
This presentation provides an overview of the work the Human Systems Integration (HSI) sub-project has done on detect and avoid (DAA) displays while working on the UAS (Unmanned Aircraft System) Integration into the NAS project. The most recent simulation on DAA interoperability with Traffic Collision Avoidance System (TCAS) is discussed in the most detail. The relationship of the work to the larger UAS community and next steps are also detailed.
Metabolic Networks Integrative Cardiac Health Project (ICHP) - Center of Excellence
2016-04-01
Annual report for Award Number W81XWH-11-2-0227, "Metabolic Networks Integrative Cardiac Health Project (ICHP) - Center of Excellence," April 2016. ... (2.6; P = 0.001) among all variables, as the most significant predictor of abnormal CIMT, thus increasing risk for CVD. Conclusions: The Integrative ...
The integrated project: a promising promotional strategy for primary health care.
Daniel, C; Mora, B
1985-10-01
The integrated project using parasite control and nutrition as entry points for family planning practice has shown considerable success in promoting health consciousness among health workers and project beneficiaries. This progress is evident in the Family Planning, Parasite Control and Nutrition (FAPPCAN) areas. The project has also mobilized technical and financial support from the local government as well as from private and civic organizations. The need for integration is underscored by the following considerations: parasite control has proved to be effective for preventive health care; the integrated project uses indigenous community health workers to accomplish its objectives; the primary health care (PHC) movement depends primarily on voluntary community participation, and the integrated project has shown that it can elicit this participation. The major health problems in the Philippines are: a prevalence of communicable and other infectious diseases; poor environmental sanitation; malnutrition; and a rapid population growth rate. The integrated program utilizes the existing village health workers in identifying problems related to family planning, parasite control and nutrition and integrates these activities into the health delivery system; educates family members on how to detect health and health-related problems; works out linkages with government agencies and the local primary health care committee in defining the scope of health-related problems; mobilizes community members to initiate their own projects; and gets the commitment of village officials and committee members. The integrated project operates within the PHC framework. A health van with a built-in video playback system provides educational and logistical support to the village workers. The primary detection and treatment of health problems are part of the village health workers' responsibilities. Research determines the project's capability to reactivate the village primary health care committees and sustain community commitment. The project initially covered 4 villages. Implementation problems included: inactive village health workers, inadequate supervision and monitoring of PHC, a lack of commitment of committee members, and the lack of financial support.
NASA Technical Reports Server (NTRS)
Bayliss, A.
1978-01-01
The scattering of the sound of a jet engine by an airplane fuselage is modeled by solving the axially symmetric Helmholtz equation exterior to a long thin ellipsoid. The integral equation method based on the single layer potential formulation is used. A family of coordinate systems on the body is introduced and an algorithm is presented to determine the optimal coordinate system. Numerical results verify that the optimal choice enables the solution to be computed with a grid that is coarse relative to the wavelength.
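In the single-layer potential formulation referred to here, the scattered field exterior to the body is represented by an unknown surface density integrated against the free-space Helmholtz Green's function; imposing the boundary condition then yields the integral equation that is discretized on the surface grid. The textbook form (notation mine, not necessarily the report's) is:

```latex
% Generic single-layer potential representation for the exterior Helmholtz problem
% (textbook notation): the scattered field is a single layer over the body surface
% Gamma with unknown density sigma; the boundary condition on Gamma gives the
% integral equation that is then discretized on a grid coarse relative to the wavelength.
\Delta u + k^2 u = 0 \quad \text{outside } \Gamma, \qquad
u(\mathbf{x}) = \int_{\Gamma} \frac{e^{ik|\mathbf{x}-\mathbf{y}|}}{4\pi|\mathbf{x}-\mathbf{y}|}\,
  \sigma(\mathbf{y})\, dS(\mathbf{y})
```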
Design and performance of a beetle-type double-tip scanning tunneling microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaschinsky, Philipp; Coenen, Peter; Pirug, Gerhard
2006-09-15
A combination of a double-tip scanning tunneling microscope with a scanning electron microscope in ultrahigh vacuum environment is presented. The compact beetle-type design made it possible to integrate two independently driven scanning tunneling microscopes in a small space. Moreover, an additional level for coarse movement allows the decoupling of the translation and approach of the tunneling tip. The position of the two tips can be controlled from the millimeter scale down to 50 nm with the help of an add-on electron microscope. The instrument is capable of atomic resolution imaging with each tip.
Polynomial approximation of Poincare maps for Hamiltonian system
NASA Technical Reports Server (NTRS)
Froeschle, Claude; Petit, Jean-Marc
1992-01-01
Different methods are proposed and tested for transforming a non-linear differential system, and more particularly a Hamiltonian one, into a map without integrating the whole orbit as in the well-known Poincare return map technique. We construct piecewise polynomial maps by coarse-graining the phase-space surface of section into parallelograms and using either only values of the Poincare maps at the vertices or also the gradient information at the nearest neighbors to define a polynomial approximation within each cell. The numerical experiments are in good agreement with both the real symplectic and Poincare maps.
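A minimal version of the "values at the vertices" variant is bilinear interpolation of the map inside each parallelogram (here an axis-aligned cell for simplicity); the sketch below uses an analytic toy map as a stand-in for the expensive numerically integrated return map.

```python
# Sketch of a piecewise-polynomial approximation of a Poincare map: the surface
# of section is coarse-grained into cells, the map is evaluated once at the cell
# vertices, and points inside a cell are mapped by bilinear interpolation of the
# vertex values (the "values only" variant; the gradient variant adds derivatives).
import numpy as np

def true_map(p):
    """Stand-in for an expensively integrated return map (here an analytic toy)."""
    x, y = p
    return np.array([x + 0.3 * np.sin(2 * np.pi * y), y + 0.1 * x])

def bilinear_map(p, x0, y0, h, vertex_values):
    """Interpolate the map inside the cell [x0,x0+h]x[y0,y0+h] from its 4 corners.
    vertex_values[i][j] is true_map evaluated at (x0+i*h, y0+j*h)."""
    s, t = (p[0] - x0) / h, (p[1] - y0) / h
    return ((1 - s) * (1 - t) * vertex_values[0][0] + s * (1 - t) * vertex_values[1][0]
            + (1 - s) * t * vertex_values[0][1] + s * t * vertex_values[1][1])

x0, y0, h = 0.0, 0.0, 0.1
corners = [[true_map((x0 + i * h, y0 + j * h)) for j in (0, 1)] for i in (0, 1)]
p = np.array([0.04, 0.07])
print("interpolated:", bilinear_map(p, x0, y0, h, corners), "exact:", true_map(p))
```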
Coarse Scale In Situ Albedo Observations over Heterogeneous Land Surfaces and Validation Strategy
NASA Astrophysics Data System (ADS)
Xiao, Q.; Wu, X.; Wen, J.; BAI, J., Sr.
2017-12-01
To evaluate and improve the quality of coarse-pixel land surface albedo products, validation against ground measurements of albedo is crucial over spatially and temporally heterogeneous land surfaces. The performance of albedo validation depends on the quality of ground-based albedo measurements at the corresponding coarse-pixel scale, which can be conceptualized as the "truth" value of albedo at the coarse-pixel scale. Wireless sensor network (WSN) technology provides a means to observe continuously at the large-pixel scale. Taking albedo products as an example, this paper is dedicated to the validation of coarse-scale albedo products over heterogeneous surfaces based on WSN-observed data, aiming to narrow the uncertainty of results caused by the spatial-scale mismatch between satellite and ground measurements over heterogeneous surfaces. The reference value of albedo at the coarse-pixel scale can be obtained through an upscaling transform function based on all of the observations for that pixel. In the future we will work to further improve and develop new methods that are better able to account for the spatio-temporal characteristics of surface albedo. Additionally, how to use the widely distributed single-site measurements over heterogeneous surfaces is also a question to be answered. Keywords: Remote sensing; Albedo; Validation; Wireless sensor network (WSN); Upscaling; Heterogeneous land surface; Albedo truth at coarse-pixel scale
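The simplest upscaling transform function of the kind described is an area-weighted average of the WSN point measurements inside the coarse pixel. The sketch below shows that calculation with invented station values and weights; operational upscaling may also use footprint models or regression against fine-resolution imagery.

```python
# Sketch of upscaling point-scale WSN albedo measurements to a coarse-pixel
# "truth" value with an area-weighted average. Weights here come from the land
# cover fraction each station is taken to represent (invented values).
import numpy as np

station_albedo = np.array([0.16, 0.22, 0.19, 0.31])   # WSN point measurements (example)
area_fraction = np.array([0.40, 0.25, 0.20, 0.15])    # fraction of the coarse pixel represented

def upscale(albedos, weights):
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                  # normalize in case fractions drift
    return float(np.dot(albedos, weights))

print(f"coarse-pixel reference albedo: {upscale(station_albedo, area_fraction):.3f}")
```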
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, W.
2012-07-01
Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question of whether the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or was caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use the 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)
Highly Coarse-Grained Representations of Transmembrane Proteins
2017-01-01
Numerous biomolecules and biomolecular complexes, including transmembrane proteins (TMPs), are symmetric or at least have approximate symmetries. Highly coarse-grained models of such biomolecules, aiming at capturing the essential structural and dynamical properties on resolution levels coarser than the residue scale, must preserve the underlying symmetry. However, making these models obey the correct physics is in general not straightforward, especially at the highly coarse-grained resolution where multiple (∼3–30 in the current study) amino acid residues are represented by a single coarse-grained site. In this paper, we propose a simple and fast method of coarse-graining TMPs obeying this condition. The procedure involves partitioning transmembrane domains into contiguous segments of equal length along the primary sequence. For the coarsest (lowest-resolution) mappings, it turns out to be most important to satisfy the symmetry in a coarse-grained model. As the resolution is increased to capture more detail, however, it becomes gradually more important to match modular repeats in the secondary structure (such as helix-loop repeats) instead. A set of eight TMPs of various complexity, functionality, structural topology, and internal symmetry, representing different classes of TMPs (ion channels, transporters, receptors, adhesion, and invasion proteins), has been examined. The present approach can be generalized to other systems possessing exact or approximate symmetry, allowing for reliable and fast creation of multiscale, highly coarse-grained mappings of large biomolecular assemblies. PMID:28043122
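The partitioning recipe described above can be pictured as splitting the residue sequence into contiguous, nearly equal-length segments and placing one coarse-grained site per segment. The sketch below shows one simple way to do this, with random coordinates standing in for real C-alpha positions and the segment count chosen arbitrarily.

```python
# Sketch of the described coarse-graining recipe: partition a transmembrane
# domain into contiguous, (nearly) equal-length segments along the primary
# sequence and place one coarse-grained site at each segment's geometric center.
# Coordinates below are random stand-ins for real C-alpha positions.
import numpy as np

def partition_indices(n_residues, n_segments):
    """Split residue indices 0..n_residues-1 into contiguous, nearly equal segments."""
    bounds = np.linspace(0, n_residues, n_segments + 1).astype(int)
    return [range(bounds[i], bounds[i + 1]) for i in range(n_segments)]

def cg_sites(coords, n_segments):
    """One CG site per segment: the mean position of the residues it contains."""
    return np.array([coords[list(seg)].mean(axis=0)
                     for seg in partition_indices(len(coords), n_segments)])

rng = np.random.default_rng(2)
calpha = rng.normal(size=(120, 3))        # 120 residues in a TM domain (placeholder)
sites = cg_sites(calpha, n_segments=8)    # ~15 residues per CG site
print(sites.shape)                        # -> (8, 3)
```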
NASA Astrophysics Data System (ADS)
Andrade, Fatima; Orsini, Celso; Maenhaut, Willy
Stacked filter units were used to collect atmospheric particles in separate coarse and fine fractions at the Sao Paulo University Campus during the winter of 1989. The samples were analysed by particle-induced X-ray emission (PIXE) and the data were subjected to an absolute principal component analysis (APCA). Five sources were identified for the fine particles: industrial emissions, which accounted for 13% of the fine mass; emissions from residual oil and diesel, explaining 41%; resuspended soil dust, with 28%; and emissions of Cu and of Mg, together with 18%. For the coarse particles, four sources were identified: soil dust, accounting for 59% of the coarse mass; industrial emissions, with 19%; oil burning, with 8%; and sea salt aerosol, with 14% of the coarse mass. A data set with various meteorological parameters was also subjected to APCA, and a correlation analysis was performed between the meteorological "absolute principal component scores" (APCS) and the APCS from the fine and coarse particle data sets. The soil dust sources for the fine and coarse aerosol were highly correlated with each other and were anticorrelated with the sea breeze component. The industrial components in the fine and coarse size fractions were also highly positively correlated. Furthermore, the industrial component was related with the northeasterly wind direction and, to a lesser extent, with the sea breeze component.
Satellite-Scale Snow Water Equivalent Assimilation into a High-Resolution Land Surface Model
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle J.M.; Reichle, Rolf H.; Houser, Paul R.; Arsenault, Kristi R.; Verhoest, Niko E.C.; Paulwels, Valentijn R.N.
2009-01-01
An ensemble Kalman filter (EnKF) is used in a suite of synthetic experiments to assimilate coarse-scale (25 km) snow water equivalent (SWE) observations (typical of satellite retrievals) into fine-scale (1 km) model simulations. Coarse-scale observations are assimilated directly using an observation operator for mapping between the coarse and fine scales or, alternatively, after disaggregation (re-gridding) to the fine-scale model resolution prior to data assimilation. In either case, observations are assimilated either simultaneously or independently for each location. Results indicate that assimilating disaggregated fine-scale observations independently (method 1D-F1) is less efficient than assimilating a collection of neighboring disaggregated observations (method 3D-Fm). Direct assimilation of coarse-scale observations is superior to a priori disaggregation. Independent assimilation of individual coarse-scale observations (method 3D-C1) can bring the overall mean analyzed field close to the truth, but does not necessarily improve estimates of the fine-scale structure. There is a clear benefit to simultaneously assimilating multiple coarse-scale observations (method 3D-Cm) even when the entire domain is observed, indicating that underlying spatial error correlations can be exploited to improve SWE estimates. Method 3D-Cm avoids artificial transitions at the coarse observation pixel boundaries and can reduce the RMSE by 60% when compared to the open loop in this study.
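The core of the direct-assimilation variants is an EnKF update in which the observation operator averages the fine pixels inside a coarse observation pixel. The sketch below shows that update for a single coarse observation and a synthetic ensemble; the sizes, error levels and perturbed-observation formulation are assumptions for illustration, not the study's exact configuration.

```python
# Minimal EnKF sketch for assimilating one coarse-scale SWE observation into a
# fine-scale ensemble, with the observation operator H defined as the average of
# the fine pixels inside the coarse pixel (a simplified stand-in for the study's
# 3D-C1/3D-Cm configurations; sizes, errors and states are invented).
import numpy as np

rng = np.random.default_rng(3)
n_fine, n_ens = 25, 40                       # 25 fine pixels inside one coarse pixel
truth = 100 + 20 * np.sin(np.linspace(0, np.pi, n_fine))             # "true" SWE field (mm)
ensemble = truth[:, None] + rng.normal(0, 15, size=(n_fine, n_ens))  # prior ensemble

H = np.full((1, n_fine), 1.0 / n_fine)       # coarse obs = mean of fine pixels
obs_err = 5.0
y = H @ truth + rng.normal(0, obs_err)       # synthetic coarse observation

# Ensemble Kalman filter update (perturbed-observations variant)
X = ensemble - ensemble.mean(axis=1, keepdims=True)
P = X @ X.T / (n_ens - 1)                    # sample error covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + np.array([[obs_err**2]]))
y_pert = y + rng.normal(0, obs_err, size=(1, n_ens))
analysis = ensemble + K @ (y_pert - H @ ensemble)

print("prior RMSE:", np.sqrt(((ensemble.mean(1) - truth) ** 2).mean()).round(2),
      "analysis RMSE:", np.sqrt(((analysis.mean(1) - truth) ** 2).mean()).round(2))
```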
Park, Seung Bum; Jang, Young Il; Lee, Jun; Lee, Byung Jae
2009-07-15
This study evaluates the quality properties and toxicity of coal bottom ash coarse aggregate and analyzes the mechanical properties of porous concrete depending on the mixing rate of coal bottom ash. The soundness and resistance to abrasion of the coal bottom ash coarse aggregate satisfied the standard for coarse aggregate for concrete. To satisfy the standard pertaining to chloride content, the coarse aggregates have to be washed more than twice. With regard to the results of the leaching test for coal bottom ash coarse aggregate and porous concrete produced with these coarse aggregates, the environmental criteria were satisfied. As the mixing rate of coal bottom ash increased, the influence on void ratio and permeability coefficient was very small, but compressive and flexural strength decreased. When coal bottom ash was mixed at over 40%, strength decreased sharply (compressive strength by 11.7-27.1%, flexural strength by a maximum of 26.4%). Also, as the mixing rate of coal bottom ash increased, it was confirmed that test specimens failed by aggregate fracture more than by binder fracture or interface fracture. To utilize coal bottom ash in large quantities, improvement methods with regard to strength, such as the incorporation of reinforcing materials and improvement of aggregate hardness, need to be considered.
Interlaced coarse-graining for the dynamical cluster approximation
NASA Astrophysics Data System (ADS)
Haehner, Urs; Staar, Peter; Jiang, Mi; Maier, Thomas; Schulthess, Thomas
The negative sign problem remains a challenging limiting factor in quantum Monte Carlo simulations of strongly correlated fermionic many-body systems. The dynamical cluster approximation (DCA) makes this problem less severe by coarse-graining the momentum space to map the bulk lattice to a cluster embedded in a dynamical mean-field host. Here, we introduce a new form of interlaced coarse-graining and compare it with the traditional coarse-graining. We show that it leads to more controlled results with weaker cluster-shape dependence and smoother cluster-size dependence, which with increasing cluster size converge to the results obtained using the standard coarse-graining. In addition, the new coarse-graining reduces the severity of the fermionic sign problem. It therefore enables calculations on much larger clusters and can allow the evaluation of the exact infinite-cluster-size result via finite-size scaling. To demonstrate this, we study the hole-doped two-dimensional Hubbard model and show that the interlaced coarse-graining, in combination with the DCA+ algorithm, permits the determination of the superconducting Tc on cluster sizes for which the results can be fitted with the Kosterlitz-Thouless scaling law. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
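For context, the standard DCA coarse-graining that the interlaced scheme modifies averages the lattice Green's function over the momentum patch surrounding each cluster momentum (common notation below, not specific to the interlaced variant introduced here):

```latex
% Standard DCA coarse-graining of the lattice Green's function (common notation):
% K labels the N_c cluster momenta, and \tilde{k} runs over the N/N_c lattice
% momenta in the Brillouin-zone patch around K.
\bar{G}(\mathbf{K}, i\omega_n)
  = \frac{N_c}{N} \sum_{\tilde{\mathbf{k}}}
    G(\mathbf{K} + \tilde{\mathbf{k}}, i\omega_n)
```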
A Policy Guide on Integrated Care (PGIC): Lessons Learned from EU Project INTEGRATE and Beyond
Devroey, Dirk
2017-01-01
Efforts are underway in many European countries to create improved integrated health and social care services. But most countries lack a strategic plan that is sustainable over time and that reflects a comprehensive systems perspective. The Policy Guide on Integrated Care (PGIC) as presented in this paper resulted from experiences with the EU Project INTEGRATE and our own work with healthcare reform for patients with chronic conditions at the national and international level. This project is one of the largest EU-funded projects on integrated care, conducted over a four-year period (2012–2016), and included partners from nine European countries. Project INTEGRATE aimed to gain insights into the leadership, management and delivery of integrated care to support European care systems in responding to the challenges of ageing populations and the rise of people living with long-term conditions. The objective of this paper is to describe the PGIC as both a tool and a reasoning flow that aims at supporting policy makers at the national and international level with the development and implementation of integrated care. Any Policy Guide on Integrated Care should build upon three building blocks: a mission, a vision and a strategy that aim at capturing the large number of factors that directly or indirectly influence the successful development of integrated care. PMID:29588631
A Policy Guide on Integrated Care (PGIC): Lessons Learned from EU Project INTEGRATE and Beyond.
Borgermans, Liesbeth; Devroey, Dirk
2017-09-25
Efforts are underway in many European countries to create improved integrated health and social care services. But most countries lack a strategic plan that is sustainable over time and that reflects a comprehensive systems perspective. The Policy Guide on Integrated Care (PGIC) as presented in this paper resulted from experiences with the EU Project INTEGRATE and our own work with healthcare reform for patients with chronic conditions at the national and international level. This project is one of the largest EU-funded projects on integrated care, conducted over a four-year period (2012-2016), and included partners from nine European countries. Project INTEGRATE aimed to gain insights into the leadership, management and delivery of integrated care to support European care systems in responding to the challenges of ageing populations and the rise of people living with long-term conditions. The objective of this paper is to describe the PGIC as both a tool and a reasoning flow that aims at supporting policy makers at the national and international level with the development and implementation of integrated care. Any Policy Guide on Integrated Care should build upon three building blocks: a mission, a vision and a strategy that aim at capturing the large number of factors that directly or indirectly influence the successful development of integrated care.
High Energy Replicated Optics to Explore the Sun: Hard X-Ray Balloon-Borne Telescope
NASA Technical Reports Server (NTRS)
Gaskin, Jessica; Apple, Jeff; StevensonChavis, Katherine; Dietz, Kurt; Holt, Marlon; Koehler, Heather; Lis, Tomasz; O'Connor, Brian; RodriquezOtero, Miguel; Pryor, Jonathan;
2013-01-01
Set to fly in the Fall of 2013 from Ft. Sumner, NM, the High Energy Replicated Optics to Explore the Sun (HEROES) mission is a collaborative effort between the NASA Marshall Space Flight Center and the Goddard Space Flight Center to upgrade an existing payload, the High Energy Replicated Optics (HERO) balloon-borne telescope, to make unique scientific measurements of the Sun and astrophysical targets during the same flight. The HEROES science payload consists of 8 mirror modules, housing a total of 109 grazing-incidence optics. These modules are mounted on a carbon-fiber and aluminum optical bench 6 m from a matching array of high pressure xenon gas scintillation proportional counters, which serve as the focal-plane detectors. The HERO gondola utilizes a differential GPS system (backed by a magnetometer) for coarse pointing in azimuth, while a shaft angle encoder plus an inclinometer provides coarse elevation. The HEROES payload will incorporate a new solar aspect system to supplement the existing star camera, for fine pointing during both the day and night. A mechanical shutter will be added to the star camera to protect it during solar observations. HEROES will also implement two novel alignment monitoring systems that will measure the alignment between the optical bench and the star camera and between the optics and detectors for improved pointing and post-flight data reconstruction. The overall payload will also be discussed. This mission is funded by the NASA HOPE (Hands On Project Experience) Training Opportunity awarded by the NASA Academy of Program/Project and Engineering Leadership, in partnership with NASA's Science Mission Directorate, Office of the Chief Engineer and Office of the Chief Technologist.
NASA Astrophysics Data System (ADS)
Matangulu Shrestha, Victor; Anandh, S.; Sindhu Nachiar, S.
2017-07-01
Concrete is a heterogeneous mixture with cement as the main ingredient combined with fine and coarse aggregate. The massive use of conventional concrete has created a shortfall in its key ingredients, natural sand and coarse aggregate, due to increased industrialisation and globalisation. To overcome this shortage, alternative materials with similar mechanical properties and composition have to be studied as replacements in conventional concrete. Coconut shell concrete is a prime option for replacing key ingredients of conventional concrete, as coconut is produced in massive quantities in Southeast Asia. Coconut shell concrete is a lightweight concrete, and research on its mix design and composition for the construction industry is still ongoing. Because concrete is weak in tension compared with compression, fibre is used to restrain cracking; coconut fibre is one of many fibres that can be used in concrete. The main aim of this project is to analyse the use of natural by-products in the construction industry and to produce lightweight, eco-friendly concrete. The project compares the mechanical properties of coconut shell concrete and conventional concrete in which fine aggregate is replaced with quarry dust and coconut fibre is added. M25 grade concrete was adopted, and testing was done at ages of 3, 7 and 28 days. In this mix, sand was replaced completely, by volume, with quarry dust. The results were analysed and compared for coconut fibre additions of 1%, 2%, 3%, 4% and 5%. From the tests conducted, coconut shell concrete with quarry dust reached its maximum strength at 4% coconut fibre, while conventional concrete showed its maximum at 2% coconut fibre.
NASA Astrophysics Data System (ADS)
Prein, A. F.; Ikeda, K.; Liu, C.; Bullock, R.; Rasmussen, R.
2016-12-01
Convective storms cause extremes such as flooding, landslides, and wind gusts, and are related to the development of tornadoes and hail. Convective storms are also the dominant source of summer precipitation in most regions of the Contiguous United States. So far little is known about how convective storms might change due to global warming. This is mainly because of the coarse grid spacing of state-of-the-art climate models, which are not able to resolve deep convection explicitly. Instead, coarse resolution models rely on convective parameterization schemes that are a major source of errors and uncertainties in climate change projections. Convection-permitting climate simulations, with grid spacings smaller than 4 km, show significant improvements in the simulation of convective storms by representing deep convection explicitly. Here we use a pair of 13-year long current and future convection-permitting climate simulations that cover large parts of North America. We use the Method for Object-Based Diagnostic Evaluation (MODE) that incorporates the time dimension (MODE-TD) to analyze the model performance in reproducing storm features in the current climate and to investigate their potential future changes. We show that the model is able to accurately reproduce the main characteristics of convective storms in the present climate. The comparison with the future climate simulation shows that convective storms significantly increase in frequency, intensity, and size. Furthermore, they are projected to move more slowly, which could result in a substantial increase in convective storm-related hazards such as flash floods, debris flows, and landslides. Some regions, such as the North Atlantic, might experience a regime shift that leads to significantly stronger storms that are unrepresented in the current climate.
High Energy Replicated Optics to Explore the Sun: Hard X-ray balloon-borne telescope
NASA Astrophysics Data System (ADS)
Gaskin, J.; Apple, J.; Chavis, K. S.; Dietz, K.; Holt, M.; Koehler, H.; Lis, T.; O'Connor, B.; Otero, M. R.; Pryor, J.; Ramsey, B.; Rinehart-Dawson, M.; Smith, L.; Sobey, A.; Wilson-Hodge, C.; Christe, S.; Cramer, A.; Edgerton, M.; Rodriguez, M.; Shih, A.; Gregory, D.; Jasper, J.; Bohon, S.
Set to fly in the Fall of 2013 from Ft. Sumner, NM, the High Energy Replicated Optics to Explore the Sun (HEROES) mission is a collaborative effort between the NASA Marshall Space Flight Center and the Goddard Space Flight Center to upgrade an existing payload, the High Energy Replicated Optics (HERO) balloon-borne telescope, to make unique scientific measurements of the Sun and astrophysical targets during the same flight. The HEROES science payload consists of 8 mirror modules, housing a total of 109 grazing-incidence optics. These modules are mounted on a carbon-fiber and aluminum optical bench 6 m from a matching array of high pressure xenon gas scintillation proportional counters, which serve as the focal-plane detectors. The HERO gondola utilizes a differential GPS system (backed by a magnetometer) for coarse pointing in azimuth, while a shaft angle encoder plus an inclinometer provides coarse elevation. The HEROES payload will incorporate a new solar aspect system to supplement the existing star camera, for fine pointing during both the day and night. A mechanical shutter will be added to the star camera to protect it during solar observations. HEROES will also implement two novel alignment monitoring systems that will measure the alignment between the optical bench and the star camera and between the optics and detectors for improved pointing and post-flight data reconstruction. The overall payload will also be discussed. This mission is funded by the NASA HOPE (Hands On Project Experience) Training Opportunity awarded by the NASA Academy of Program/Project and Engineering Leadership, in partnership with NASA's Science Mission Directorate, Office of the Chief Engineer and Office of the Chief Technologist.
Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Thomas; Efendiev, Yalchin; Tchelepi, Hamdi
2016-05-24
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics.
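The flavor of such coarse-scale basis construction can be sketched in one dimension, where a multiscale basis function on a coarse element solves the local problem -(k(x)u')' = 0 with boundary values 0 and 1, so fine-scale permeability variation is built into the coarse space. This is a generic localized construction for illustration only, not the adaptive local-global methods developed in the project; the coefficient field below is invented.

```python
import numpy as np

def msfem_basis(k_vals, h):
    """Multiscale basis on one coarse element: exact solution of -(k(x) u')' = 0
    with u(0)=0, u(1)=1, where k_vals holds the coefficient in each of the n fine cells."""
    resist = h / k_vals                                       # "resistance" of each fine cell
    u = np.concatenate(([0.0], np.cumsum(resist))) / resist.sum()
    return u                                                  # values at the n+1 fine nodes

# Example: a channelized (high-contrast) coefficient inside one coarse element
rng = np.random.default_rng(1)
n_fine = 20
k_vals = np.where(rng.random(n_fine) < 0.3, 100.0, 1.0)      # high-k channels in a k=1 matrix
phi = msfem_basis(k_vals, h=1.0 / n_fine)
print(np.round(phi, 3))  # basis flattens across high-k cells and steepens across low-k cells
```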
Multiscale analysis and computation for flows in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Efendiev, Yalchin; Hou, T. Y.; Durlofsky, L. J.
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.
Bayless, E. Randall; Arihood, Leslie D.; Reeves, Howard W.; Sperl, Benjamin J.S.; Qi, Sharon L.; Stipe, Valerie E.; Bunch, Aubrey R.
2017-01-18
As part of the National Water Availability and Use Program established by the U.S. Geological Survey (USGS) in 2005, this study took advantage of about 14 million records from State-managed collections of water-well drillers’ records and created a database of hydrogeologic properties for the glaciated United States. The water-well drillers’ records were standardized to be relatively complete and error-free and to provide consistent variables and naming conventions that span all State boundaries.Maps and geospatial grids were developed for (1) total thickness of glacial deposits, (2) total thickness of coarse-grained deposits, (3) specific-capacity based transmissivity and hydraulic conductivity, and (4) texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity. The information included in these maps and grids is required for most assessments of groundwater availability, in addition to having applications to studies of groundwater flow and transport. The texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity were based on an assumed range of hydraulic conductivity values for coarse- and fine-grained deposits and should only be used with complete awareness of the methods used to create them. However, the maps and grids of texture-based estimated equivalent hydraulic conductivity and transmissivity may be useful for application to areas where a range of measured values is available for re-scaling.Maps of hydrogeologic information for some States are presented as examples in this report but maps and grids for all States are available electronically at the project Web site (USGS Glacial Aquifer System Groundwater Availability Study, http://mi.water.usgs.gov/projects/WaterSmart/Map-SIR2015-5105.html) and the Science Base Web site, https://www.sciencebase.gov/catalog/item/58756c7ee4b0a829a3276352.
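Texture-based equivalent conductivities of a layered column are commonly estimated with a thickness-weighted arithmetic mean for horizontal flow and a harmonic mean for vertical flow. A small sketch of that arithmetic follows, using an invented layer log and assumed coarse and fine end-member conductivities, not the values or lithologic classes used in the USGS study.

```python
# Layered column from a hypothetical driller's log: (thickness_m, texture)
layers = [(3.0, "coarse"), (7.0, "fine"), (5.0, "coarse"), (10.0, "fine")]

# Assumed end-member hydraulic conductivities (m/d); illustrative only
K = {"coarse": 30.0, "fine": 0.01}

b = [t for t, _ in layers]                      # layer thicknesses
k = [K[tex] for _, tex in layers]               # layer conductivities
B = sum(b)

Kh = sum(bi * ki for bi, ki in zip(b, k)) / B   # arithmetic mean: horizontal equivalent
Kv = B / sum(bi / ki for bi, ki in zip(b, k))   # harmonic mean: vertical equivalent
T = Kh * B                                      # texture-based transmissivity

print(f"Kh = {Kh:.3f} m/d, Kv = {Kv:.5f} m/d, T = {T:.1f} m^2/d")
```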
NASA Astrophysics Data System (ADS)
Snyder, Noah P.; Castele, Michael R.; Wright, Jed R.
2009-02-01
The rivers of coastal Maine flow through mainstem lakes and long low-gradient reaches that break the continuum of bedload transport expected in nonparaglacial landscapes. Stream erosion of glacial deposits supplies coarse sediment to these systems. The land use history includes intensive timber harvest and associated dam construction, which may have altered the frequency of substrate-mobilizing events. These watersheds are vital habitat for the last remaining wild anadromous Atlantic salmon in the United States. Future adjustments in channel morphology and habitat quality (via natural stream processes or restoration projects) depend on erosion, transport, and deposition of coarse sediment. These factors motivate our study of competence at four sites in the Sheepscot and Narraguagus watersheds. Three of the four sites behaved roughly similarly, with particle entrainment during intervals that include winter ice and spring flood conditions, and relatively minor bed mobilization during moderate floods in the summer and fall (with a recurrence interval of 2-3 years). The fourth site, on the Sheepscot River mainstem, exhibits more vigorous entrainment of marked particles and more complex three-dimensional channel morphology. This contrast is partially due to local geomorphic conditions that favor high shear stresses (particularly relatively steep gradient), but also likely to nourishment of the bedload saltation system by recruitment from an eroding glacial deposit upstream. Our results suggest that the frequency and magnitude of bedload transport are reach specific, depending on factors including local channel geometry, upstream sediment supply and transport, and formation of anchor ice. This presents a challenge for stream practitioners in this region: different reaches may require contrasting management strategies. Our results underscore the importance of understanding channel processes at a given site and assessing conditions upstream and downstream as a prerequisite for conducting habitat restoration projects.
The combustion of sound and rotten coarse woody debris: a review
Joshua C. Hyde; Alistair M.S. Smith; Roger D. Ottmar; Ernesto C. Alvarado; Penelope Morgan
2011-01-01
Coarse woody debris serves many functions in forest ecosystem processes and has important implications for fire management as it affects air quality, soil heating and carbon budgets when it combusts. There is relatively little research evaluating the physical properties relating to the combustion of this coarse woody debris with even less specifically addressing...
ERIC Educational Resources Information Center
Wilson, Leslie; And Others
This evaluation project was designed to assess 37 persons (ages 21-72) who had moved from intermediate care facilities or skilled nursing facilities into innovative one-person or two-person community integrated living arrangements as a result of the Supported Placements in Integrated Community Environments project. The 37 persons had severe or…
ERIC Educational Resources Information Center
Gajek, Elzbieta
2018-01-01
Curriculum integration is one of the concepts which has been discussed for years. Telecollaborative projects, which employ elements of distance learning, provide opportunities for putting the idea into practice. Analysis of eTwinning projects undertaken in Polish schools aims at demonstrating the integrative role of distance learning approaches…
Biology of the Coarse Aerosol Mode: Insights Into Urban Aerosol Ecology
NASA Astrophysics Data System (ADS)
Dueker, E.; O'Mullan, G. D.; Montero, A.
2015-12-01
Microbial aerosols have been understudied, despite implications for climate studies, public health, and biogeochemical cycling. Because viable bacterial aerosols are often associated with coarse aerosol particles, our limited understanding of the coarse aerosol mode further impedes our ability to develop models of viable bacterial aerosol production, transport, and fate in the outdoor environment, particularly in crowded urban centers. To address this knowledge gap, we studied aerosol particle biology and size distributions in a broad range of urban and rural settings. Our previously published findings suggest a link between microbial viability and local production of coarse aerosols from waterways, waste treatment facilities, and terrestrial systems in urban and rural environments. Both in coastal Maine and in New York Harbor, coarse aerosols and viable bacterial aerosols increased with increasing wind speeds above 4 m s-1, a dynamic that was observed over time scales ranging from minutes to hours. At a New York City superfund-designated waterway regularly contaminated with raw sewage, aeration remediation efforts resulted in significant increases of coarse aerosols and bacterial aerosols above that waterway. Our current research indicates that bacterial communities in aerosols at this superfund site have a greater similarity to bacterial communities in the contaminated waterway with wind speeds above 4 m s-1. Size-fractionated sampling of viable microbial aerosols along the urban waterfront has also revealed significant shifts in bacterial aerosols, and specifically bacteria associated with coarse aerosols, when wind direction changes from onshore to offshore. This research highlights the key connections between bacterial aerosol viability and the coarse aerosol fraction, which is important in assessments of production, transport, and fate of bacterial contamination in the urban environment.
Iwagami, Sho; Onda, Yuichi; Tsujimura, Maki; Abe, Yutaka
2017-01-01
Radiocesium ( 137 Cs) migration from headwaters in forested areas provides important information, as the output from forest streams subsequently enters various land-use areas and downstream rivers. Thus, it is important to determine the composition of 137 Cs fluxes (dissolved fraction, suspended sediment, or coarse organic matter) that migrate through a headwater stream. In this study, the 137 Cs discharge by suspended sediment and coarse organic matter from a forest headwater catchment was monitored. The 137 Cs concentrations in suspended sediment and coarse organic matter, such as leaves and branches, and the amounts of suspended sediment and coarse organic matter were measured at stream sites in three headwater catchments in Yamakiya District, located ∼35 km northwest of Fukushima Dai-ichi Nuclear Power Plant (FDNPP) from August 2012 to September 2013, following the earthquake and tsunami disaster. Suspended sediment and coarse organic matter were sampled at intervals of approximately 1-2 months. The 137 Cs concentrations of suspended sediment and coarse organic matter were 2.4-49 kBq/kg and 0.85-14 kBq/kg, respectively. The 137 Cs concentrations of the suspended sediment were closely correlated with the average deposition density of the catchment. The annual proportions of contribution of 137 Cs discharge by suspended sediment, coarse organic matter, and dissolved fraction were 96-99%, 0.0092-0.069%, and 0.73-3.7%, respectively. The total annual 137 Cs discharge from the catchment was 0.02-0.3% of the deposition. Copyright © 2016 Elsevier Ltd. All rights reserved.
Center for Ground Vehicle Development and Integration
2011-04-22
CGVDI Organizational Chart: CGVDI Director; Project and Operations Management (Project Management, Operations Management); Engineered ... Metals; Welding; Assembly/Paint. CGVDI serves as a single entry point to RDECOM for ground vehicle system integration projects, as well as for managing cost, schedule, performance and risk.
DOT National Transportation Integrated Search
2004-12-07
The project originally was granted funding from the earmark in an application dated June 1, 2000. A revised application received approval on May 19, 2003 to reflect a different proposed implementation of the project, while still achieving the project...
Mutually unbiased coarse-grained measurements of two or more phase-space variables
NASA Astrophysics Data System (ADS)
Paul, E. C.; Walborn, S. P.; Tasca, D. S.; Rudnicki, Łukasz
2018-05-01
Mutual unbiasedness of the eigenstates of phase-space operators (such as position and momentum, or their standard coarse-grained versions) exists only in the limiting case of infinite squeezing. In Phys. Rev. Lett. 120, 040403 (2018), 10.1103/PhysRevLett.120.040403, it was shown that mutual unbiasedness can be recovered for periodic coarse graining of these two operators. Here we investigate mutual unbiasedness of coarse-grained measurements for more than two phase-space variables. We show that mutual unbiasedness can be recovered between periodic coarse graining of any two nonparallel phase-space operators. We illustrate these results through optics experiments, using the fractional Fourier transform to prepare and measure mutually unbiased phase-space variables. The differences between two and three mutually unbiased measurements are discussed. Our results contribute to bridging the gap between continuous and discrete quantum mechanics, and they could be useful in quantum-information protocols.
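Mutual unbiasedness itself is easy to state numerically: two orthonormal bases of a d-dimensional space are mutually unbiased when every squared overlap equals 1/d. The toy check below uses the computational and discrete-Fourier bases (a discrete analogue of position and momentum); it illustrates the definition only, not the periodic coarse-graining construction of the paper.

```python
import numpy as np

d = 8
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)  # DFT basis
E = np.eye(d)                                                                   # computational basis

overlaps = np.abs(E.conj().T @ F) ** 2     # |<e_j|f_k>|^2 for all pairs
print(np.allclose(overlaps, 1.0 / d))      # True: the two bases are mutually unbiased
```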
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan
2016-01-01
The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal-in-time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
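The structure of the Parareal iteration described here can be sketched on a scalar ODE, with a single large forward-Euler step standing in for the cheap (reduced-model) coarse propagator and many small steps standing in for the fine propagator. The dynamics and parameters are illustrative assumptions, not the power-system implementation reported in the paper.

```python
import numpy as np

def f(t, y):                      # toy dynamics standing in for the power-system model
    return -2.0 * y + np.sin(t)

def euler(y0, t0, t1, nsteps):    # forward-Euler propagator over [t0, t1]
    y, t, h = y0, t0, (t1 - t0) / nsteps
    for _ in range(nsteps):
        y, t = y + h * f(t, y), t + h
    return y

coarse = lambda y0, t0, t1: euler(y0, t0, t1, 1)     # one big step (cheap)
fine   = lambda y0, t0, t1: euler(y0, t0, t1, 200)   # many small steps (accurate)

T, N, K = 10.0, 20, 5                    # horizon, sub-intervals, Parareal iterations
ts = np.linspace(0.0, T, N + 1)
U = np.zeros(N + 1); U[0] = 1.0

for n in range(N):                       # initial guess: sequential coarse sweep
    U[n + 1] = coarse(U[n], ts[n], ts[n + 1])

for k in range(K):                       # Parareal correction iterations
    F = np.array([fine(U[n], ts[n], ts[n + 1]) for n in range(N)])    # parallel in practice
    G_old = np.array([coarse(U[n], ts[n], ts[n + 1]) for n in range(N)])
    Unew = U.copy()
    for n in range(N):                   # sequential coarse sweep with fine/coarse correction
        G_new = coarse(Unew[n], ts[n], ts[n + 1])
        Unew[n + 1] = G_new + F[n] - G_old[n]
    U = Unew

print(U[-1])   # converges toward the serial fine solution as k grows
```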
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...
2017-11-26
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
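Generically, an integral projection model advances a state distribution through n_{t+1}(y) = ∫ k(y, x) n_t(x) dx, usually discretized with a midpoint rule. The sketch below uses a made-up Gaussian growth kernel and survival curve; it illustrates the projection step only, not the temperature-dependent mountain pine beetle model of the paper.

```python
import numpy as np

# Midpoint-rule mesh over the continuous state (e.g., developmental stage or size)
lo, hi, m = 0.0, 10.0, 200
h = (hi - lo) / m
x = lo + h * (np.arange(m) + 0.5)

def survival(x):
    return 0.9 / (1.0 + np.exp(-(x - 2.0)))            # made-up survival curve

def growth(y, x, sigma=0.5, rate=1.0):
    # probability density of moving from state x to state y in one time step
    return np.exp(-((y - (x + rate)) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Discretized projection kernel: K[i, j] ~ k(x_i, x_j) * h
K = growth(x[:, None], x[None, :]) * survival(x)[None, :] * h

n = np.exp(-((x - 1.0) ** 2))                          # initial state distribution
for _ in range(5):                                     # iterate the integral projection model
    n = K @ n
print(n.sum())                                         # total population after five steps
```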
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
Low-cost solar array project and Proceedings of the 15th Project Integration Meeting
NASA Technical Reports Server (NTRS)
1980-01-01
Progress made by the Low-Cost Solar Array Project during the period December 1979 to April 1980 is described. Project analysis and integration, technology development in silicon material, large area silicon sheet and encapsulation, production process and equipment development, engineering, and operation are included.
Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine
2002-01-01
New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxillary methods must be employed. We describe a two-stage procedure where the...
The Detroit Exposure and Aerosol Research Study (DEARS) provided data to compare outdoor residential coarse particulate matter (PM10-2.5) concentrations in six different areas of Detroit with data from a central monitoring site. Daily and seasonal influences on the spa...
Travis W. Idol; Phillip E. Pope; Rebecca A. Figler; Felix Ponder Jr.
1999-01-01
Coarse woody debris is an important component influencing forest nutrient cycling and contributes to long-term soil productivity. The common practice of classifying coarse woody debris into different decomposition classes has seldom been related to the chemistry/biochemistry of the litter, which is the long term objective of our research. The objective of this...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J E; Vassilevski, P S; Woodward, C S
This paper provides extensions of an element agglomeration AMG method to nonlinear elliptic problems discretized by the finite element method on general unstructured meshes. The method constructs coarse discretization spaces and corresponding coarse nonlinear operators as well as their Jacobians. We introduce both standard (fairly quasi-uniformly coarsened) and non-standard (coarsened away) coarse meshes and respective finite element spaces. We use both kinds of spaces in FAS-type coarse subspace correction (or Schwarz) algorithms. Their performance is illustrated on a number of model problems. The coarsened away spaces seem to perform better than the standard spaces for problems with nonlinearities in the principal part of the elliptic operator.
Nonlinear evolution of coarse-grained quantum systems with generalized purity constraints
NASA Astrophysics Data System (ADS)
Burić, Nikola
2010-12-01
Constrained quantum dynamics is used to propose a nonlinear dynamical equation for pure states of a generalized coarse-grained system. The relevant constraint is given either by the generalized purity or by the generalized invariant fluctuation, and the coarse-grained pure states correspond to the generalized coherent, i.e. generalized nonentangled states. Open system model of the coarse-graining is discussed. It is shown that in this model and in the weak coupling limit the constrained dynamical equations coincide with an equation for pointer states, based on Hilbert-Schmidt distance, that was previously suggested in the context of the decoherence theory.
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. Previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to a lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
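Independently of the AMGe construction, the FAS two-grid correction itself can be sketched for a 1-D nonlinear problem -u'' + u^3 = f: smooth on the fine grid, form the coarse problem with the FAS (tau-corrected) right-hand side, solve it, and interpolate the correction back. The sketch below is a schematic geometric two-grid version with weighted nonlinear Jacobi smoothing, assumed here purely for illustration; it is not the element-based AMGe solver of the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def apply_A(u, h):
    """A(u) = -u'' + u^3 on a uniform grid, zero Dirichlet boundaries (interior nodes only)."""
    upad = np.concatenate(([0.0], u, [0.0]))
    return -(upad[2:] - 2.0 * u + upad[:-2]) / h**2 + u**3

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted nonlinear Jacobi: one damped scalar Newton update per node and sweep."""
    for _ in range(sweeps):
        r = f - apply_A(u, h)
        u = u + omega * r / (2.0 / h**2 + 3.0 * u**2)    # divide by the Jacobian diagonal
    return u

def restrict(v):            # full weighting: fine interior (2m+1 nodes) -> coarse interior (m nodes)
    return 0.25 * v[0:-2:2] + 0.5 * v[1:-1:2] + 0.25 * v[2::2]

def prolong(v, n_fine):     # linear interpolation back to the fine interior
    w = np.zeros(n_fine)
    w[1::2] = v
    w[0:-2:2] += 0.5 * v
    w[2::2] += 0.5 * v
    return w

def fas_two_grid(u, f, h):
    u = smooth(u, f, h)                                   # pre-smoothing
    r_c = restrict(f - apply_A(u, h))                     # restricted residual
    u_c = restrict(u)                                     # restricted approximation
    h_c = 2.0 * h
    f_c = apply_A(u_c, h_c) + r_c                         # FAS (tau-corrected) right-hand side
    v_c = fsolve(lambda v: apply_A(v, h_c) - f_c, u_c)    # "exact" coarse solve
    u = u + prolong(v_c - u_c, len(u))                    # coarse-grid correction
    return smooth(u, f, h)                                # post-smoothing

# Fine grid with 64 intervals; manufactured right-hand side for u(x) = sin(pi x)
N = 64
h = 1.0 / N
x = np.linspace(h, 1.0 - h, N - 1)
f = apply_A(np.sin(np.pi * x), h)

u = np.zeros_like(f)
for cycle in range(10):
    u = fas_two_grid(u, f, h)
    print(cycle, np.linalg.norm(f - apply_A(u, h)))       # residual norm drops each cycle
```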
Bryophyte species associations with coarse woody debris and stand ages in Oregon
Rambo, T.; Muir, Patricia S.
1998-01-01
We quantified the relationships of 93 forest floor bryophyte species, including epiphytes from incorporated litterfall, to substrate and stand age in Pseudotsuga menziesii-Tsuga heterophylla stands at two sites in western Oregon. We used the method of Dufrêne and Legendre that combines a species' relative abundance and relative frequency, to calculate that species' importance in relation to environmental variables. The resulting "indicator value" describes a species' reliability for indicating the given environmental parameter. Thirty-nine species were indicative of either humus, a decay class of coarse woody debris, or stand age. Bryophyte community composition changed along the continuum of coarse woody debris decomposition from recently fallen trees with intact bark to forest floor humus. Richness of forest floor bryophytes will be enhanced when a full range of coarse woody debris decay classes is present. A suite of bryophytes indicated old-growth forest. These were mainly either epiphytes associated with older conifers or liverworts associated with coarse woody debris. Hardwood-associated epiphytes mainly indicated young stands. Mature conifers, hardwoods, and coarse woody debris are biological legacies that can be protected when thinning managed stands to foster habitat complexity and biodiversity, consistent with an ecosystem approach to forest management.
Coarse graining atomistic simulations of plastically deforming amorphous solids
NASA Astrophysics Data System (ADS)
Hinkle, Adam R.; Rycroft, Chris H.; Shields, Michael D.; Falk, Michael L.
2017-05-01
The primary mode of failure in disordered solids results from the formation and persistence of highly localized regions of large plastic strains known as shear bands. Continuum-level field theories capable of predicting this mechanical response rely upon an accurate representation of the initial and evolving states of the amorphous structure. We perform molecular dynamics simulations of a metallic glass and propose a methodology for coarse graining discrete, atomistic quantities, such as the potential energies of the elemental constituents. A strain criterion is established and used to distinguish the coarse-grained degrees-of-freedom inside the emerging shear band from those of the surrounding material. A signal-to-noise ratio provides a means of evaluating the strength of the signal of the shear band as a function of the coarse graining. Finally, we investigate the effect of different coarse graining length scales by comparing a two-dimensional, numerical implementation of the effective-temperature description in the shear transformation zone (STZ) theory with direct molecular dynamics simulations. These comparisons indicate the coarse graining length scale has a lower bound, above which there is a high level of agreement between the atomistics and the STZ theory, and below which the concept of effective temperature breaks down.
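One common way to coarse-grain a discrete per-atom quantity such as potential energy is to average it on a regular grid with a smoothing kernel whose width sets the coarse-graining length. The sketch below does this with a Gaussian kernel on invented 2-D data; it illustrates the general operation only, not the strain-criterion methodology of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_atoms, box = 5000, 50.0
pos = rng.random((n_atoms, 2)) * box               # atom positions in a periodic 2-D box
pe = -3.0 + 0.2 * rng.standard_normal(n_atoms)     # per-atom potential energies (toy values)

def coarse_grain(pos, values, box, nbins, sigma):
    """Gaussian-weighted average of a per-atom quantity on an nbins x nbins grid."""
    centers = (np.arange(nbins) + 0.5) * box / nbins
    gx, gy = np.meshgrid(centers, centers, indexing="ij")
    field = np.zeros((nbins, nbins))
    norm = np.zeros((nbins, nbins))
    for (x, y), v in zip(pos, values):
        dx = np.abs(gx - x); dx = np.minimum(dx, box - dx)   # minimum-image distances
        dy = np.abs(gy - y); dy = np.minimum(dy, box - dy)
        w = np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))
        field += w * v
        norm += w
    return field / norm

cg_small = coarse_grain(pos, pe, box, nbins=25, sigma=1.0)   # short coarse-graining length
cg_large = coarse_grain(pos, pe, box, nbins=25, sigma=5.0)   # heavier smoothing
print(cg_small.std(), cg_large.std())   # larger sigma -> smaller spatial fluctuations
```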
Zhang, Yuwei; Cao, Zexing; Zhang, John Zenghui; Xia, Fei
2017-02-27
Construction of coarse-grained (CG) models of large biomolecules for multiscale simulations demands a rigorous definition of their CG sites. Several coarse-graining methods, such as simulated annealing and steepest descent (SASD) based on essential dynamics coarse-graining (ED-CG) or the stepwise local iterative optimization (SLIO) based on fluctuation maximization coarse-graining (FM-CG), have been developed for this purpose. However, the practical application of methods such as SASD based on ED-CG is limited because they are too computationally expensive. In this work, we extend the applicability of ED-CG by combining it with the SLIO algorithm. A comprehensive comparison of the optimized results and accuracy of various algorithms based on ED-CG shows that SLIO is the fastest as well as the most accurate algorithm among them. ED-CG combined with SLIO can give converged results as the number of CG sites increases, which demonstrates that it is another efficient method for coarse-graining large biomolecules. The construction of CG sites for the Ras protein using MD fluctuations demonstrates that CG sites derived from FM-CG accurately reflect the fluctuation properties of secondary structures in Ras.
Brownian dynamics simulations of lipid bilayer membrane with hydrodynamic interactions in LAMMPS
NASA Astrophysics Data System (ADS)
Fu, Szu-Pei; Young, Yuan-Nan; Peng, Zhangli; Yuan, Hongyan
2016-11-01
Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiency has been reported in cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4-6 nm so that there is only one particle in the thickness direction. Yuan et al. proposed a pair-potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane (such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity). In this work we implement such an interaction potential in LAMMPS to simulate large-scale lipid systems such as vesicles and red blood cells (RBCs). We also consider the effect of cytoskeleton on the lipid membrane dynamics as a model for red blood cell (RBC) dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. We focus on two sets of LAMMPS simulations: 1. Vesicle shape transitions with varying enclosed volume; 2. RBC shape transitions with different enclosed volume. This work is funded by NSF under Grant DMS-1222550.
Brownian dynamics simulations of lipid bilayer membrane with hydrodynamic interactions in LAMMPS
NASA Astrophysics Data System (ADS)
Fu, Szu-Pei; Young, Yuan-Nan; Peng, Zhangli; Yuan, Hongyan
Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiency has been reported in the cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4-6 nm so that there is only one particle in the thickness direction. Yuan et al. proposed a pair-potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane (such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity). In this work we implement such interaction potential in LAMMPS to simulate large-scale lipid systems such as vesicles and red blood cells (RBCs). We also consider the effect of cytoskeleton on the lipid membrane dynamics as a model for red blood cell (RBC) dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. We focus on two sets of LAMMPS simulations: 1. Vesicle shape transitions with varying enclosed volume; 2. RBC shape transitions with different enclosed volume.
Bard, Robert L.; Morishita, Masako; Dvonch, J. Timothy; Wang, Lu; Yang, Hui-yu; Spino, Catherine; Mukherjee, Bhramar; Kaplan, Mariana J.; Yalavarthi, Srilakshmi; Oral, Elif A.; Ajluni, Nevin; Sun, Qinghua; Harkema, Jack; Rajagopalan, Sanjay
2014-01-01
Background: Fine particulate matter (PM) air pollution is associated with numerous adverse health effects, including increased blood pressure (BP) and vascular dysfunction. Coarse PM substantially contributes to global air pollution, yet differs in characteristics from fine particles and is currently not regulated. However, the cardiovascular (CV) impacts of coarse PM exposure remain largely unknown. Objectives: Our goal was to elucidate whether coarse PM, like fine PM, is itself capable of eliciting adverse CV responses. Methods: We performed a randomized double-blind crossover study in which 32 healthy adults (25.9 ± 6.6 years of age) were exposed to concentrated ambient coarse particles (CAP; 76.2 ± 51.5 μg/m3) in a rural location and filtered air (FA) for 2 hr. We measured CV outcomes during, immediately after, and 2 hr postexposures. Results: Both systolic (mean difference = 0.32 mmHg; 95% CI: 0.05, 0.58; p = 0.021) and diastolic BP (0.27 mmHg; 95% CI: 0.003, 0.53; p = 0.05) linearly increased per 10 min of exposure during the inhalation of coarse CAP when compared with changes during FA exposure. Heart rate was on average higher (4.1 bpm; 95% CI: 3.06, 5.12; p < 0.0001) and the ratio of low-to-high frequency heart rate variability increased (0.24; 95% CI: 0.07, 0.41; p = 0.007) during coarse particle versus FA exposure. Other outcomes (brachial flow-mediated dilatation, microvascular reactive hyperemia index, aortic hemodynamics, pulse wave velocity) were not differentially altered by the exposures. Conclusions: Inhalation of coarse PM from a rural location is associated with a rapid elevation in BP and heart rate during exposure, likely due to the triggering of autonomic imbalance. These findings add mechanistic evidence supporting the biological plausibility that coarse particles could contribute to the triggering of acute CV events. Citation: Brook RD, Bard RL, Morishita M, Dvonch JT, Wang L, Yang HY, Spino C, Mukherjee B, Kaplan MJ, Yalavarthi S, Oral EA, Ajluni N, Sun Q, Brook JR, Harkema J, Rajagopalan S. 2014. Hemodynamic, autonomic, and vascular effects of exposure to coarse particulate matter air pollution from a rural location. Environ Health Perspect 122:624–630; http://dx.doi.org/10.1289/ehp.1306595 PMID:24618231
NASA Technical Reports Server (NTRS)
Hunthausen, Roger J.
1988-01-01
Recently completed projects in which advanced diagnostic concepts were explored and/or demonstrated are summarized. The projects begin with the design of integrated diagnostics for the Army's new gas turbine engines, and advance to the application of integrated diagnostics to other aircraft subsystems. Finally, a recent project is discussed which ties together subsystem fault monitoring and diagnostics with a more complete picture of flight domain knowledge.
Proceedings of the 22nd Project Integration Meeting
NASA Technical Reports Server (NTRS)
1983-01-01
This report describes progress made by the Flat-Plate Solar Array Project during the period January to September 1983. It includes reports on silicon sheet growth and characterization, module technology, silicon material, cell processing and high-efficiency cells, environmental isolation, engineering sciences, module performance and failure analysis and project analysis and integration. It includes a report on, and copies of visual presentations made at the 22nd Project Integration Meeting held at Pasadena, California, on September 28 and 29, 1983.
Site planning and integration fiscal year 1999 multi-year work plan (MYWP) update for WBS 1.8.2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
SCHULTZ, E.A.
The primary mission of the Site Planning and Integration (SP and I) project is to assist Fluor Daniel Project Direction in ensuring that all work performed under the Project Hanford Management Contract (PHMC) is adequately planned, executed, and controlled, and that performance is measured and reported in an integrated fashion. Furthermore, SP and I is responsible for the development, implementation, and management of systems and processes that integrate technical, schedule, and cost baselines for PHMC work.
Gong, Hui; Xu, Dongli; Yuan, Jing; Li, Xiangning; Guo, Congdi; Peng, Jie; Li, Yuxin; Schwarz, Lindsay A.; Li, Anan; Hu, Bihe; Xiong, Benyi; Sun, Qingtao; Zhang, Yalun; Liu, Jiepeng; Zhong, Qiuyuan; Xu, Tonghui; Zeng, Shaoqun; Luo, Qingming
2016-01-01
The precise annotation and accurate identification of neural structures are prerequisites for studying mammalian brain function. The orientation of neurons and neural circuits is usually determined by mapping brain images to coarse axial-sampling planar reference atlases. However, individual differences at the cellular level likely lead to position errors and an inability to orient neural projections at single-cell resolution. Here, we present a high-throughput precision imaging method that can acquire a co-localized brain-wide data set of both fluorescent-labelled neurons and counterstained cell bodies at a voxel size of 0.32 × 0.32 × 2.0 μm in 3 days for a single mouse brain. We acquire mouse whole-brain imaging data sets of multiple types of neurons and projections with anatomical annotation at single-neuron resolution. The results show that the simultaneous acquisition of labelled neural structures and cytoarchitecture reference in the same brain greatly facilitates precise tracing of long-range projections and accurate locating of nuclei. PMID:27374071
DOT National Transportation Integrated Search
2012-05-01
The Vermont Integrated Land-Use and Transportation Carbon Estimator (VILTCE) project is part of a larger effort to develop environmental metrics related to travel, and to integrate these tools into a travel model under UVM TRC Signature Project No. 1...
Integrated Network Testbed for Energy Grid Research and Technology
Under the Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project, NREL and partners completed five successful technology demonstrations at the ESIF. INTEGRATE is a $6.5-million, cost
Prototype development and demonstration for integrated dynamic transit operations.
DOT National Transportation Integrated Search
2016-01-01
This document serves as the Final Report specific to the Integrated Dynamic Transit Operations (IDTO) Prototype Development and Deployment Project, hereafter referred to as IDTO Prototype Deployment or IDTO PD project. This project was performed unde...
UAS-NAS Stakeholder Feedback Report
NASA Technical Reports Server (NTRS)
Randall, Debra; Murphy, Jim; Grindle, Laurie
2016-01-01
The need to fly UAS in the NAS to perform missions of vital importance to national security and defense, emergency management, science, and to enable commercial applications has been continually increasing over the past few years. To address this need, the NASA Aeronautics Research Mission Directorate (ARMD) Integrated Aviation Systems Program (IASP) formulated and funded the Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project (hereafter referred to as UAS-NAS Project) from 2011 to 2016. The UAS-NAS Project identified the following need statement: The UAS community needs routine access to the global airspace for all classes of UAS. The Project identified the following goal: To provide research findings to reduce technical barriers associated with integrating UAS into the NAS utilizing integrated system level tests in a relevant environment. This report provides a summary of the collaborations between the UAS-NAS Project and its primary stakeholders and how the Project applied and incorporated the feedback.
NASA Astrophysics Data System (ADS)
Felton, E. Anne
2002-10-01
Hypotheses advanced concerning the origin of the Pleistocene Hulopoe Gravel on Lanai include mega-tsunami, abandoned beach, 'multiple event,' rocky shoreline, and for parts of the deposit, Native Hawaiian constructions and degraded lava flow fronts. Uplift of Lanai shorelines has been suggested for deposits occurring up to at least 190 m. These conflicting hypotheses highlight problems with the interpretation of coarse gravel deposits containing marine biotic remains. The geological records of the processes implied by these hypotheses should look very different. Discrimination among these or any other hypotheses for the origins of the Hulopoe Gravel will require careful study of vertical and lateral variations in litho- and biofacies, facies architecture, contact relationships and stratal geometries of this deposit. Observations of modern rocky shorelines, particularly on Lanai adjacent to Hulopoe Gravel outcrops, have shown that distinctive coarse gravel facies are present, several of which occur in specific geomorphic settings. Tectonic, isostatic and eustatic changes which cause rapid shoreline translations on steep slopes favour preservation of former rocky shorelines and associated sedimentary deposits both above and below sea level. The sedimentary record of those shorelines is likely to be complex. The modern rocky shoreline sedimentary environment is a hostile one, largely neglected by sedimentologists. A range of high-energy processes characterize these shorelines. Long-period swell, tsunami and storm waves can erode hard bedrock and generate coarse gravel. They also erode older deposits, depositing fresh ones containing mixtures of materials of different ages. Additional gravelly material may be contributed by rivers draining steep hinterlands. To fully evaluate rocky shoreline deposition in the broadest sense, for both the Hulopoe Gravel and other deposits, sedimentary facies models are needed for rocky shorelines occurring in a range of settings. Recognition and description of rocky shoreline deposits are crucial for correctly interpreting the geological history of oceanic and volcanic arc islands, for distinguishing between ancient tsunami and storm deposits, and for interpreting coarse-grained deposits preserved on high energy coasts of continents. Problems include not only the absence of appropriate sedimentary facies models linking rocky shoreline deposits and environments but also, until recently, lack of a systematic descriptive scheme applicable to coarse gravel deposits generally. Two complementary methods serve to integrate the wide range of bed and clast attributes and parameters which characterize complex coarse gravel deposits. The composition and fabric (CAF) method has a materials focus, providing detailed description of attributes of the constituent clasts, petrology, the proportions of gravel, sand and mud, and the ways in which these materials are organized. The sedimentary facies model building (FMB) method emphasizes the organization of a deposit on a bed-by-bed basis to identify facies and infer depositional processes. The systematic use of a comprehensive gravel fabric and petrography log (GFPL), in conjunction with detailed vertical profiles, provides visual representations of a range of deposit characteristics. Criteria useful for distinguishing sedimentary facies in the Hulopoe Gravel are: grain-size modes, amount of matrix, bed geometry, sedimentary structures, bed fabric and clast roundness.
How Is Topographic Simplicity Maintained in Ephemeral, Dryland Channels?
NASA Astrophysics Data System (ADS)
Singer, M. B.; Michaelides, K.
2014-12-01
Topography in river channels reflects the time integral of streamflow-driven sediment flux mass balance. In dryland basins, infrequent and spatially heterogeneous rainfall generates a nonuniform sediment supply to ephemeral channels from hillslopes, and this sediment is subsequently sorted by spatially and temporally discontinuous channel flow. Paradoxically, the time integral of these interactions tends to produce simple topography, manifest in straight longitudinal profiles and symmetrical cross sections, which are distinct from bed morphology in perennial channels, but the controlling processes are unclear. We present a set of numerical modeling experiments based on field measurements and scenarios of uniform/nonuniform streamflow to investigate ephemeral channel bed-material flux and net sediment accumulation behavior in response to variations in channel hydrology, width, and grain size distribution. Coupled with variations in valley and channel width and frequent, yet discontinuous hillslope supply of coarse sediment, bed material becomes weakly sorted into coarse and fine sections that then affect rates of channel Qs. We identify three sediment transport thresholds relevant to poorly armored, dryland channels: 1) a low critical value required to entrain any grain sizes from the bed; 2) a value of ~4.5τ*c needed to move all grain sizes within a cross section with equal mobility; and 3) a value of ~50τ*c required to entrain gravel at nearly equivalent rates at all sections along a reach. The latter represents the 'geomorphically effective' event, which resets channel topography. We show that spatially variable flow below ~50τ*c creates and subsequently destroys incipient topography along ephemeral reaches and that large flood events above this threshold apparently dampen fluctuations in longitudinal sediment flux and thus smooth incipient channel bar forms. Both processes contribute to the maintenance of topographic simplicity in ephemeral dryland channels.
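The thresholds quoted above are multiples of the critical Shields stress, τ* = τ / ((ρ_s - ρ) g D) with τ ≈ ρ g h S for a wide channel. The short sketch below evaluates this arithmetic for assumed flow depths, slope, and grain size chosen only so that the resulting ratios roughly bracket the three regimes; none of the values are from the study.

```python
# Dimensionless (Shields) stress for an assumed ephemeral channel; values are illustrative only
rho, rho_s, g = 1000.0, 2650.0, 9.81     # water and sediment density (kg/m^3), gravity (m/s^2)
D50 = 0.02                               # assumed median grain size (m)
tau_c_star = 0.045                       # assumed critical Shields stress

def shields(depth, slope):
    tau = rho * g * depth * slope        # boundary shear stress (depth-slope product)
    return tau / ((rho_s - rho) * g * D50)

for depth in (0.07, 0.35, 3.7):          # hypothetical flow depths (m)
    ts = shields(depth, slope=0.02)
    print(f"h={depth:4.2f} m  tau*={ts:5.3f}  ratio to critical = {ts / tau_c_star:5.1f}")
# the ratios roughly bracket the three regimes (about 1, 4.5, and 50 times critical) above
```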
Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch
2016-11-14
Self Consistent Field (SCF) theory serves as an efficient tool for studying mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of the TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate the missing correlations. Increasing σ reduces correlations and SCF reproduces well the free energy in MC simulations.
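A generic thermodynamic-integration skeleton looks as follows: the free-energy difference along a coupling parameter λ is ΔF = ∫ ⟨∂U/∂λ⟩_λ dλ, estimated by sampling the average at a set of λ values and integrating with the trapezoid rule. The sampling routine below is a placeholder returning a made-up curve; the paper's scheme additionally applies an external field to bypass the first-order isotropic-nematic transition, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_dU_dlambda(lam, n_samples=2000):
    """Placeholder for the ensemble average <dU/dlambda> at coupling strength lam.
    In a real calculation this would come from Monte Carlo sampling of the Maier-Saupe
    system at that coupling; here it is a smooth made-up curve plus sampling noise."""
    samples = -2.0 * lam + 0.5 * lam**2 + 0.05 * rng.standard_normal(n_samples)
    return samples.mean()

lams = np.linspace(0.0, 1.0, 11)                       # integration nodes in the coupling
means = np.array([mean_dU_dlambda(l) for l in lams])

# Trapezoid-rule thermodynamic integration: Delta F = integral of <dU/dlambda> d lambda
dF = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lams))
print(f"Delta F = {dF:.4f} (in units of kT)")
```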
Soil quality index for evaluation of reclaimed coal mine spoil.
Mukhopadhyay, S; Masto, R E; Yadav, A; George, J; Ram, L C; Shukla, S P
2016-01-15
Success in the remediation of mine spoil depends largely on the selection of appropriate tree species. The impacts of remediation on mine soil quality cannot be sufficiently assessed by individual soil properties. However, combining soil properties into an integrated soil quality index provides a more holistic picture of the reclamation potential of tree species. Remediation potentials of four tree species (Acacia auriculiformis, Cassia siamea, Dalbergia sissoo, and Leucaena leucocephala) were studied on reclaimed coal mine overburden dumps of Jharia coalfield, Dhanbad, India. Soil samples were collected under the canopies of the tree species. Comparative studies on the properties of soils in the reclaimed and the reference sites showed improvements in soil quality parameters of the reclaimed site: coarse fraction (-20.4%), bulk density (-12.8%), water holding capacity (+0.92%), pH (+25.4%), EC (+2.9%), cation exchange capacity (+46.6%), organic carbon (+91.5%), N (+60.6%), P (+113%), K (+19.9%), Ca (+49.6%), Mg (+12.2%), Na (+19.6%), S (+46.7%), total polycyclic aromatic hydrocarbons (-71.4%), dehydrogenase activity (+197%), and microbial biomass carbon (+115%). Principal component analysis (PCA) was used to identify key mine soil quality indicators to develop a soil quality index (SQI). Selected indicators include: coarse fraction, pH, EC, soil organic carbon, P, Ca, S, and dehydrogenase activity. The indicator values were converted into a unitless score (0-1.00) and integrated into the SQI. The calculated SQI was significantly (P<0.001) correlated with tree biomass and canopy cover. The reclaimed site had 52-93% higher SQI than the reference site. Higher SQI values were obtained for sites reclaimed with D. sissoo (+93.1%) and C. siamea (+86.4%). Copyright © 2015 Elsevier B.V. All rights reserved.
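For readers unfamiliar with soil quality indices, the sketch below shows one common way such an index can be assembled: each indicator is scored onto a unitless 0-1 scale and the scores are combined with weights. The indicator values, scoring bounds, and weights here are hypothetical; the study selects its indicators and derives weights from its own PCA.

```python
# Illustrative sketch of a weighted soil quality index (SQI). The indicator
# values, scoring bounds, and weights below are hypothetical placeholders;
# the study selects its indicators and derives weights from its own PCA.

def linear_score(value, worst, best):
    """Map an indicator onto a unitless 0-1 score (higher is better when best > worst)."""
    s = (value - worst) / (best - worst)
    return max(0.0, min(1.0, s))

indicators = {  # name: (measured value, (worst, best), weight)
    "organic_carbon_pct": (1.2, (0.1, 2.0), 0.30),
    "dehydrogenase_activity": (45.0, (5.0, 80.0), 0.25),
    "pH": (6.5, (4.0, 7.0), 0.20),
    "available_P_mg_kg": (12.0, (2.0, 25.0), 0.25),
}

sqi = sum(w * linear_score(v, worst, best) for v, (worst, best), w in indicators.values())
print(f"SQI = {sqi:.2f}")
```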
Patrick A. Zollner; Kevin J. Crane
2003-01-01
We investigated relationships between canopy closure, shrub cover and the use of coarse woody debris as a travel path by eastern chipmunks (Tamias striatus) in the north central United States. Fine scale movements of chipmunks were followed with tracking spools and the percentage of each movement path directly along coarse woody debris was recorded...
Heidi J. Renninger; Nicholas Carlo; Kenneth L. Clark; Karina V.R. Schäfer
2014-01-01
Although snags and coarse woody debris are a small component of ecosystem respiration, disturbances can significantly increase the mass and respiration from these carbon (C) pools. The objectives of this study were to (1) measure respiration rates of snags and coarse woody debris throughout the year in a forest previously defoliated by gypsy moths, (2) develop models...
Christopher W. Woodall; Greg C. Liknes
2008-01-01
Coarse and fine woody debris are substantial forest ecosystem carbon stocks; however, there is a lack of understanding of how these detrital carbon stocks vary across forested landscapes. Because forest woody detritus production and decay rates may partially depend on climatic conditions, the accumulation of coarse and fine woody debris carbon stocks in forests may be...
Improving DOE Project Performance Using the DOD Integrated Master Plan - 12481
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alleman, Glen B.; Nosbisch, Michael R.
2012-07-01
DOE O 413 measures a project's progress to plan by the consumption of funding, the passage of time, and the meeting of milestones. In March of 2003, then Under Secretary, Energy, Science, Card received a memo directing the implementation of Project Management and the Project Management Manual, including the Integrated Master Plan and Integrated Master Schedule. This directive states 'the integrated master plan and schedule tie together all project tasks by showing their logical relationships and any constraints controlling the start or finish of each task. This process results in a hierarchy of related functional and layered schedules derived from the Work Breakdown Structure that can be used for monitoring and controlling project progress'. This paper shows how restoring the IMP/IMS paradigm to DOE program management increases the probability of program success in ways not currently available using DOE O 413 processes alone. Using DOE O 413 series guidance, adding the Integrated Master Plan and Integrated Master Schedule paradigm would provide a hierarchical set of performance measures for each 'package of work' that provides measurable visibility to the increasing maturity of the project. This measurable maturity provides the mechanism to forecast future performance of cost, schedule, and technical outcomes in ways not available using just the activities in DOE O 413. With this information, project managers have another tool available to address the issues identified in GAO-07-336 and GAO-09-406. (authors)
Flat Plate Solar Array Project: Proceedings of the 20th Project Integration Meeting
NASA Technical Reports Server (NTRS)
Mcdonald, R. R.
1982-01-01
Progress made by the Flat-Plate Solar Array Project during the period November 1981 to April 1982 is reported. Project analysis and integration, technology research in silicon material, large-area silicon sheet and environmental isolation, cell and module formation, engineering sciences, and module performance and failure analysis are covered.
Integrated care networks and quality of life: linking research and practice
Warner, Morton; Gould, Nicholas
2003-01-01
Abstract Purpose To report on the development of a project dedicated to improving the quality of life of older people through the creation of integrated networks. Context The project is set within a post-industrial community and against a backdrop of government re-organisation and devolution within Wales. The immediate research context is determined by utilising an approach to the structure of integration derived theoretically. Case description Project CHAIN (Community Health Alliances through Integrated Networks) adopts a network perspective as a means of addressing both the determinants of health and service delivery in health and social care. The Project partners are: healthcare commissioners and providers; local authority directorates including community services and transportation; the voluntary and private sectors; and a university institute. Co-opted participants include fora representing older people's interests. Data sources The Project incorporates an action research method. This paper highlights qualitative data elicited from interviews with health and social care managers and practitioners. Conclusions and discussion The Project is ongoing and we record progress in building five integrated networks. PMID:16896421
Collaborative project-based learning: an integrative science and technological education project
NASA Astrophysics Data System (ADS)
Baser, Derya; Ozden, M. Yasar; Karaarslan, Hasan
2017-04-01
Background: By blending collaborative learning and project-based learning (PBL) according to Wolff's (2003) design categories, students interacted in a learning environment in which they developed their technology integration practices as well as their technological and collaborative skills.
Developing integrated parametric planning models for budgeting and managing complex projects
NASA Technical Reports Server (NTRS)
Etnyre, Vance A.; Black, Ken U.
1988-01-01
The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing costs relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus-123 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
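A minimal sketch of the trapezoidal-segmentation idea described above: the total project cost is the sum of integrals of a piecewise-linear cost-loading function (cost per unit time) over its segments. The breakpoints and rates below are hypothetical.

```python
# Sketch of trapezoidal segmentation: total project cost as the sum of
# integrals of a piecewise-linear cost-loading function over its segments.
# Breakpoints and rates are hypothetical illustration values.

def segment_cost(t0, t1, rate0, rate1):
    """Integral of a linear cost-loading function over [t0, t1] (trapezoid area)."""
    return 0.5 * (rate0 + rate1) * (t1 - t0)

# (time, cost rate) breakpoints of the loading function, e.g. months and $k/month
breakpoints = [(0, 0.0), (2, 40.0), (6, 40.0), (9, 10.0), (10, 0.0)]

total = sum(
    segment_cost(t0, t1, r0, r1)
    for (t0, r0), (t1, r1) in zip(breakpoints, breakpoints[1:])
)
print(f"Total project cost: {total:.1f} $k")
```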
Integrated learning of mathematics, science and technology concepts through LEGO/Logo projects
NASA Astrophysics Data System (ADS)
Wu, Lina
This dissertation examined integrated learning in the domains of mathematics, science and technology based on Piaget's constructivism, Papert's constructionism, and project-based approach to education. Ten fifth grade students were involved in a two-month long after school program where they designed and built their own computer-controlled LEGO/Logo projects that required the use of gears, ratios and motion concepts. The design of this study centered on three notions of integrated learning: (1) integration in terms of what educational materials/settings provide, (2) integration in terms of students' use of those materials, and (3) integration in the psychological sense. In terms of the first notion, the results generally showed that the LEGO/Logo environment supported the integrated learning of math, science and technology concepts. Regarding the second notion, the students all completed impressive projects of their own design. They successfully combined gears, motors, and LEGO parts together to create motion and writing control commands to manipulate the motion. But contrary to my initial expectations, their successful designs did not require numerical reasoning about ratios in designing effective gear systems. When they did reason about gear relationships, they worked with "qualitative" ratios, e.g., "a larger driver gear with a smaller driven gear increases the speed." In terms of the third notion of integrated learning, there was evidence in all four case study students of the psychological processes involved in linking mathematical, scientific, and/or technological concepts together to achieve new conceptual units. The students not only made connections between ideas and experiences, but also recognized decisive patterns and relationships in their project work. The students with stronger overall project performances showed more evidence of synthesis than the students with relatively weaker performances did. The findings support the conclusion that all three notions of the integrated learning are important for understanding what the students learned from their project work. By considering these notions together, and by deliberating about their interrelations, we take a step towards understanding the integrated learning.
Integrated impacts of future electricity mix scenarios on select southeastern US water resources
NASA Astrophysics Data System (ADS)
Yates, D.; Meldrum, J.; Flores-Lopez, F.; Davis, Michelle
2013-09-01
Recent studies on the relationship between thermoelectric cooling and water resources have been conducted at coarse geographic resolution and do not adequately evaluate the localized water impacts on specific rivers and water bodies. We present the application of an integrated electricity generation-water resources planning model of the Apalachicola/Chattahoochee/Flint (ACF) and Alabama-Coosa-Tallapoosa (ACT) rivers based on the regional energy deployment system (ReEDS) and the water evaluation and planning (WEAP) system. A future scenario that includes a growing population and a warmer, drier regional climate shows that a low-carbon electricity fuel mix could help maintain lower river temperatures downstream of once-through-cooled coal plants. These impacts are shown to be localized, as the cumulative impacts of different electric fuel-mix scenarios are muted in this relatively water-rich region, even in a warmer and drier future climate.
Comparison of different methods used in integral codes to model coagulation of aerosols
NASA Astrophysics Data System (ADS)
Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.
2013-09-01
The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
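For orientation, the sketch below integrates the exact discrete Smoluchowski coagulation equation with a constant kernel on integer multiples of a monomer volume; sectional codes such as those compared in the paper approximate this equation on much coarser, geometrically spaced volume grids (volume ratio of two or more between neighboring bins), which is where the error discussed above enters. All numerical values are illustrative.

```python
import numpy as np

# Reference sketch: discrete Smoluchowski coagulation with a constant kernel,
# resolved on integer multiples of a monomer volume, advanced with forward Euler.
# Kernel, concentrations, and time step are assumed values for illustration.

K = 1.0e-9       # constant coagulation kernel, cm^3/s (assumed)
N_SIZES = 100    # number of discrete sizes tracked (k = 1..N_SIZES monomers)
DT, STEPS = 1.0, 200

n = np.zeros(N_SIZES + 1)
n[1] = 1.0e7     # initial monomer number concentration, 1/cm^3

for _ in range(STEPS):
    dn = np.zeros_like(n)
    total = n[1:].sum()
    for k in range(1, N_SIZES + 1):
        gain = 0.5 * K * sum(n[i] * n[k - i] for i in range(1, k))  # i + j = k collisions
        loss = K * n[k] * total                                     # losses to all sizes
        dn[k] = gain - loss
    n += DT * dn   # particles growing beyond N_SIZES monomers are simply dropped

print(f"total number concentration after {STEPS * DT:.0f} s: {n[1:].sum():.3e} 1/cm^3")
```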
Structural-functional integrated concrete with macro-encapsulated inorganic PCM
NASA Astrophysics Data System (ADS)
Mohseni, Ehsan; Tang, Waiching; Wang, Zhiyu
2017-09-01
Over the last few years, the application of thermal energy storage systems incorporating phase change materials (PCMs) to improve the productivity and efficiency of building energy use has grown rapidly. In this study, a structural-functional integrated concrete was developed using macro-encapsulated PCM-lightweight aggregate (LWA) as partial replacement (25 and 50% by volume) of coarse aggregate in control concrete. The PCM-LWA was prepared by incorporation of an inorganic PCM into porous LWAs through vacuum impregnation. The mechanical and thermal performance of the PCM-LWA concrete were studied. The test results revealed that although the compressive strength of concrete with PCM-LWA was lower than that of the control concrete, it ranged from 22.02 MPa to 42.88 MPa, which is above the minimum strength requirement for structural application. The thermal performance test indicated that the macro-encapsulated PCM-LWA underwent the phase change transition, reducing the indoor temperature.
NASA Astrophysics Data System (ADS)
Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.
2017-07-01
We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.
Risk Quantification of Systems Engineering Documents Improves Probability of DOD Project Success
2009-09-01
Presents a comprehensive risk model for DoD milestone review documentation as well as recommended changes to the Capability Maturity Model Integration (CMMI) Project Planning and Risk Management process areas. Keywords: Milestone Documentation, Project Planning, Rational Frame, Political Frame, CMMI Project Planning Process Area, CMMI Risk Management Process Area. The intent is to...
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.
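A minimal sketch of the interpolation step at the heart of a coarse-node scatter correction of this kind: scatter scored at sparse detector nodes is linearly interpolated to the full detector sampling and subtracted from the measured projection. The detector geometry, node layout, and all intensity values are made up for illustration and are not taken from the CRFD implementation.

```python
import numpy as np

# Sketch of the interpolation step of a coarse-node scatter correction:
# Monte Carlo scatter is scored only at sparse node points on the detector,
# linearly interpolated to the full detector sampling, and subtracted from
# the measured projection. All values below are illustrative.

n_pixels = 512
pixels = np.arange(n_pixels)

node_positions = np.linspace(0, n_pixels - 1, 9)       # sparse forced-detection nodes
node_scatter = np.array([3.0, 3.4, 4.1, 4.8, 5.0, 4.7, 4.0, 3.3, 2.9])  # arbitrary units

scatter_profile = np.interp(pixels, node_positions, node_scatter)  # dense scatter estimate

primary = 10.0 + 2.0 * np.cos(2 * np.pi * pixels / n_pixels)  # synthetic primary signal
measured = primary + scatter_profile                          # synthetic measured projection

corrected = measured - scatter_profile
print("max residual vs. true primary:", np.abs(corrected - primary).max())
```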
NASA Downscaling Project: Final Report
NASA Technical Reports Server (NTRS)
Ferraro, Robert; Waliser, Duane; Peters-Lidard, Christa
2017-01-01
A team of researchers from NASA Ames Research Center, Goddard Space Flight Center, the Jet Propulsion Laboratory, and Marshall Space Flight Center, along with university partners at UCLA, conducted an investigation to explore whether downscaling coarse resolution global climate model (GCM) predictions might provide valid insights into the regional impacts sought by decision makers. Since the computational cost of running global models at high spatial resolution for any useful climate scale period is prohibitive, the hope for downscaling is that a coarse resolution GCM provides sufficiently accurate synoptic scale information for a regional climate model (RCM) to accurately develop fine scale features that represent the regional impacts of a changing climate. As a proxy for a prognostic climate forecast model, and so that ground truth in the form of satellite and in-situ observations could be used for evaluation, the MERRA and MERRA - 2 reanalyses were used to drive the NU - WRF regional climate model and a GEOS - 5 replay. This was performed at various resolutions that were at factors of 2 to 10 higher than the reanalysis forcing. A number of experiments were conducted that varied resolution, model parameterizations, and intermediate scale nudging, for simulations over the continental US during the period from 2000 - 2010. The results of these experiments were compared to observational datasets to evaluate the output.
Mapping wildfire burn severity in the Arctic Tundra from downsampled MODIS data
Kolden, Crystal A.; Rogan, John
2013-01-01
Wildfires are historically infrequent in the arctic tundra, but are projected to increase with climate warming. Fire effects on tundra ecosystems are poorly understood and difficult to quantify in a remote region where a short growing season severely limits ground data collection. Remote sensing has been widely utilized to characterize wildfire regimes, but primarily from the Landsat sensor, which has limited data acquisition in the Arctic. Here, coarse-resolution remotely sensed data are assessed as a means to quantify wildfire burn severity of the 2007 Anaktuvuk River Fire in Alaska, the largest tundra wildfire ever recorded on Alaska's North Slope. Data from Landsat Thematic Mapper (TM) and downsampled Moderate-resolution Imaging Spectroradiometer (MODIS) were processed to spectral indices and correlated to observed metrics of surface, subsurface, and comprehensive burn severity. Spectral indices were strongly correlated to surface severity (maximum R2 = 0.88) and slightly less strongly correlated to substrate severity. Downsampled MODIS data showed a decrease in severity one year post-fire, corroborating rapid vegetation regeneration observed on the burned site. These results indicate that widely-used spectral indices and downsampled coarse-resolution data provide a reasonable supplement to often-limited ground data collection for analysis and long-term monitoring of wildfire effects in arctic ecosystems.
NASA Astrophysics Data System (ADS)
Ramos, A.; Moreno, E.; Rubio, B.; Calas, H.; Galarza, N.; Rubio, J.; Diez, L.; Castellanos, L.; Gómez, T.
Some technical aspects of two Spanish cooperation projects, funded by the DPI and Innpacto Programs of the R&D National Plan, are discussed. The objective is to examine the common belief that ultrasonic testing in the MHz range is not a usable tool for detecting internal flaws in highly attenuating pieces made of coarse-grained steel. High-strength steels, used in some safety-critical industrial infrastructures of the energy and transport sectors, are difficult to inspect using conventional state-of-the-art ultrasonic technology because their internal microstructures are very attenuating and coarse-grained. We study whether this inspection difficulty can be overcome with intense interrogating pulses and advanced signal processing of the acquired echoes. A possible solution would depend on drastically improving signal-to-noise ratios by applying new advances in ultrasonic transduction, HV electronics for intense pulsed driving of the testing probes, and ad hoc digital processing or focusing of the received noisy signals, tailored to each material to be inspected. Attaining this challenging aim on robust steel pieces would open the possibility of improved inspection of critical industrial components made of highly attenuating and dispersive materials, such as new composites in aeronautics and motorway bridges, or new metallic alloys in the nuclear area, where additional testing limitations often appear.
Kim, Haseog; Park, Sangki; Kim, Hayong
2016-07-29
There has been increased deconstruction and demolition of reinforced concrete structures due to the aging of the structures and the redevelopment of urban areas, resulting in the generation of massive amounts of construction waste. The production volume of waste concrete is projected to increase rapidly, to over 100 million tons by 2020. However, due to their high cement paste content, recycled aggregates have low density and a high absorption ratio. They are mostly used for low-value land reclamation purposes rather than in higher-value applications. This study was performed to determine an effective method for removing cement paste from recycled aggregates by using abrasion and substituting the process water with acidic water. The aim of this study is to analyze the quality of the recycled fine aggregates produced by this complex method and to investigate the optimum manufacturing conditions for recycled fine aggregates based on the design of experiment. The experimental parameters considered were the water ratio, the coarse aggregate ratio, and the abrasion time; the experiment yielded data concerning the properties of the recycled sand. It was found that high-quality recycled fine aggregates can be obtained with 8.57 min of abrasion-crusher time and a recycled coarse aggregate ratio of over 1.5.
A symplectic integration method for elastic filaments
NASA Astrophysics Data System (ADS)
Ladd, Tony; Misra, Gaurav
2009-03-01
Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.
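A minimal sketch of the approach described above, under simplifying assumptions: the filament Hamiltonian is discretized over beads (harmonic stretching plus a discrete-curvature bending term, with torsion omitted) and advanced with velocity Verlet, one of the explicit symplectic integrators mentioned. All parameters are illustrative and not taken from the paper.

```python
import numpy as np

# Bead-discretized filament Hamiltonian (stretching + discrete-curvature
# bending; torsion omitted) integrated with velocity Verlet. Illustrative only.

N, a = 32, 1.0                          # number of beads, rest bond length
ks, kb, m, dt = 100.0, 5.0, 1.0, 1e-3   # stretch/bend stiffness, bead mass, time step

def forces(r):
    f = np.zeros_like(r)
    # Stretching: E_s = 0.5*ks*sum_i (|r_{i+1}-r_i| - a)^2
    bond = r[1:] - r[:-1]
    L = np.linalg.norm(bond, axis=1, keepdims=True)
    pull = ks * (L - a) * bond / L
    f[:-1] += pull                      # force on the earlier bead of each bond
    f[1:] -= pull                       # equal and opposite on the later bead
    # Bending: E_b = 0.5*kb*sum_i |r_{i-1} - 2 r_i + r_{i+1}|^2 (discrete curvature)
    d = np.zeros_like(r)
    d[1:-1] = r[:-2] - 2.0 * r[1:-1] + r[2:]
    dp = np.vstack([np.zeros((1, 3)), d, np.zeros((1, 3))])
    f -= kb * (dp[:-2] - 2.0 * dp[1:-1] + dp[2:])
    return f

def energy(r, v):
    L = np.linalg.norm(r[1:] - r[:-1], axis=1)
    d = r[:-2] - 2.0 * r[1:-1] + r[2:]
    return 0.5 * m * (v**2).sum() + 0.5 * ks * ((L - a)**2).sum() + 0.5 * kb * (d**2).sum()

rng = np.random.default_rng(0)
r = np.outer(np.arange(N, dtype=float), [a, 0.0, 0.0])   # straight initial filament
r += 0.01 * rng.standard_normal(r.shape)                  # small perturbation
v = np.zeros_like(r)

E0 = energy(r, v)
f = forces(r)
for _ in range(5000):                   # velocity Verlet (symplectic, time-reversible)
    v += 0.5 * dt * f / m
    r += dt * v
    f = forces(r)
    v += 0.5 * dt * f / m

print("relative energy drift:", abs(energy(r, v) - E0) / abs(E0))
```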
Technical support plan for HANDI 2000 business management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, D.E.
The Hanford Data Integration 2000 (HANDI 2000) Project will result in an integrated and comprehensive set of functional applications containing core information necessary to support the Project Hanford Management Contract. It is based on the Commercial-Off-The-Shelf (COTS) product solution with commercially proven business processes. The PassPort (PP) software is an integrated application for Accounts Payable, Contract Management, Inventory Management, and Purchasing. The PeopleSoft (PS) software is an integrated application for General Ledger, Project Costing, Human Resources, Payroll, Benefits, and Training. The implementation of this set of products, as the first deliverable of the HANDI 2000 Project, is referred to as the Business Management System (BMS) and Chemical Management.
I-15 integrated corridor management system : project management plan.
DOT National Transportation Integrated Search
2011-06-01
The Project Management Plan (PMP) assists the San Diego ICM Team by defining a procedural framework for management and control of the I-15 Integrated Corridor Management Demonstration Project, and development and deployment of the ICM System. The PMP...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vögele, Martin; Department of Theoretical Biophysics, Max Planck Institute of Biophysics, Frankfurt a. M.; Holm, Christian
2015-12-28
We present simulations of aqueous polyelectrolyte complexes with new MARTINI models for the charged polymers poly(styrene sulfonate) and poly(diallyldimethylammonium). Our coarse-grained polyelectrolyte models allow us to study large length and long time scales with regard to chemical details and thermodynamic properties. The results are compared to the outcomes of previous atomistic molecular dynamics simulations and verify that electrostatic properties are reproduced by our MARTINI coarse-grained approach with reasonable accuracy. Structural similarity between the atomistic and the coarse-grained results is indicated by a comparison between the pair radial distribution functions and the cumulative number of surrounding particles. Our coarse-grained models are able to quantitatively reproduce previous findings like the correct charge compensation mechanism and a reduced dielectric constant of water. These results can be interpreted as the underlying reason for the stability of polyelectrolyte multilayers and complexes and validate the robustness of the proposed models.
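As a reference for the structural comparison mentioned above, the sketch below computes a pair radial distribution function g(r) for a periodic cubic box of point particles; the random coordinates stand in for coarse-grained bead positions, and the box size and particle count are assumptions.

```python
import numpy as np

# Sketch of a pair radial distribution function g(r) for a periodic cubic box.
# Random coordinates stand in for coarse-grained bead positions; box size and
# particle count are assumed values for illustration.

rng = np.random.default_rng(3)
L, N = 10.0, 500                         # box edge (nm) and particle count (assumed)
pos = rng.uniform(0.0, L, size=(N, 3))

nbins, r_max = 100, L / 2
edges = np.linspace(0.0, r_max, nbins + 1)
counts = np.zeros(nbins)

for i in range(N - 1):
    d = pos[i + 1:] - pos[i]
    d -= L * np.round(d / L)             # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    counts += np.histogram(r[r < r_max], bins=edges)[0]

rho = N / L**3
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
g = 2.0 * counts / (N * rho * shell_vol)   # factor 2: each pair counted once
r_mid = 0.5 * (edges[1:] + edges[:-1])
print("g(r) near r = L/4:", g[np.argmin(np.abs(r_mid - L / 4))])
```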
Taniguchi, H
1985-11-01
Resolutions adopted by the 12th Annual Asian Parasite Control/Family Planning (APCO/FP) Conference held in Colombo, Sri Lanka, urge the incorporation of quality of life issues of all dimensions in projects of all participating countries. One study discussed during the conference concerned health volunteers of the integrated project in Sri Lanka and analyzed the motivating factors that lead young community members to work on a voluntary basis. Another topic covered was the role of women in the achievement of primary health care. Video reports were presented by Bangladesh on family planning and parasite control activities, Brazil on utilization of existing organizations to improve successful integrated projects, China on making twin concerns of family planning and primary health care, Indonesia on strengthening urban FP/MCH clinics, Korea on health promotion through the integrated project, Malaysia on the NADI program, the Philippines on the Cebu model of integrated health care, and Thailand on fee-charging urban programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate-summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
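A generic sketch of the integral projection idea, n(z', t+1) = ∫ K(z', z) n(z, t) dz, discretized with the midpoint rule, is given below. The Gaussian growth kernel and logistic survival are a toy example, not the temperature-dependent mountain pine beetle model developed in the paper.

```python
import numpy as np

# Generic integral projection model (IPM) iteration discretized with the
# midpoint rule. The survival and growth functions are toy placeholders.

n_mesh, z_lo, z_hi = 200, 0.0, 10.0
h = (z_hi - z_lo) / n_mesh
z = z_lo + h * (np.arange(n_mesh) + 0.5)          # midpoints of the stage mesh

def survival(z):
    return 1.0 / (1.0 + np.exp(-(z - 3.0)))       # stage-dependent survival (toy)

def growth(z_new, z_old):
    mu, sd = z_old + 1.0, 0.5                     # mean development increment (toy)
    return np.exp(-0.5 * ((z_new - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Discretized projection kernel: K[i, j] ~ h * G(z_i | z_j) * s(z_j)
K = h * growth(z[:, None], z[None, :]) * survival(z)[None, :]

n = np.exp(-0.5 * ((z - 1.0) / 0.3) ** 2)          # initial stage distribution
for t in range(5):
    n = K @ n                                      # one projection step per iteration
print("total population after 5 steps:", n.sum() * h)
```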
The Integration of Multi-State Clarus Data into Data Visualization Tools
DOT National Transportation Integrated Search
2011-12-20
This project focused on the integration of all Clarus Data into the Regional Integrated Transportation Information System (RITIS) for real-time situational awareness and historical safety data analysis. The initial outcomes of this project are the fu...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-26
DEPARTMENT OF ENERGY: Extension of Public Comment Period for Hydrogen Energy California's Integrated Gasification Combined Cycle Project Preliminary Staff Assessment and Draft Environmental Impact Statement...
Scott Horn; James L. Hanula
2008-01-01
This study determined if short-term removal of coarse woody debris would reduce prey available to red-cockaded woodpeckers (Picoides borealis Vieillot) and other bark-foraging birds at the Savannah River Site in Aiken and Barnwell counties, SC. All coarse woody debris was removed from four 9-ha plots of mature loblolly pine (Pinus taeda...
NASA Astrophysics Data System (ADS)
Markakis, Konstantinos; Valari, Myrto; Engardt, Magnuz; Lacressonniere, Gwendoline; Vautard, Robert; Andersson, Camilla
2016-02-01
Ozone, PM10 and PM2.5 concentrations over Paris, France and Stockholm, Sweden were modelled at 4 and 1 km horizontal resolutions respectively for the present and 2050 periods employing decade-long simulations. We account for large-scale global climate change (RCP-4.5) and fine-resolution bottom-up emission projections developed by local experts and quantify their impact on future pollutant concentrations. Moreover, we identify biases related to the implementation of regional-scale emission projections by comparing modelled pollutant concentrations between the fine- and coarse-scale simulations over the study areas. We show that over urban areas with major regional contribution (e.g. the city of Stockholm) the bias related to coarse-scale projections may be significant and lead to policy misclassification. Our results stress the need to better understand the mechanism of bias propagation across the modelling scales in order to design more successful local-scale strategies. We find that the impact of climate change is spatially homogeneous in both regions, implying strong regional influence. The climate benefit for ozone (daily mean and maximum) is up to -5 % for Paris and -2 % for Stockholm city. The climate benefit on PM2.5 and PM10 in Paris is between -5 and -10 %, while for Stockholm we estimate mixed trends of up to 3 % depending on season and size class. In Stockholm, emission mitigation leads to concentration reductions up to 15 % for daily mean and maximum ozone and 20 % for PM. Through a sensitivity analysis we show that this response is entirely due to changes in emissions at the regional scale. On the contrary, over the city of Paris (VOC-limited photochemical regime), local mitigation of NOx emissions increases future ozone concentrations due to ozone titration inhibition. This competing trend between the respective roles of emission and climate change, results in an increase in 2050 daily mean ozone by 2.5 % in Paris. Climate and not emission change appears to be the most influential factor for maximum ozone concentration over the city of Paris, which may be particularly interesting from a health impact perspective.
NASA Astrophysics Data System (ADS)
Pondell, C.; Kuehl, S. A.; Canuel, E. A.
2016-12-01
There are several methodologies used to determine chronologies for sediments deposited within the past 100 years, including 210Pb and 137Cs radioisotopes and organic and inorganic contaminants. These techniques are quite effective in fine sediments, which generally have a high affinity for metals and organic compounds. However, the application of these chronological tools becomes limited in systems where coarse sediments accumulate. Englebright Lake is an impoundment in northern California where sediment accumulation is characterized by a combination of fine and coarse sediments. This combination of sediment grain size complicated chronological analysis using the more traditional 137Cs chronological approach. This study established a chronology of these sediments using 239+240Pu isotopes. While most of the 239+240Pu activity was measured in the fine grain size fraction (<63 microns), up to 25% of the plutonium activity was detected in the coarse size fractions of sediments from Englebright Lake. Profiles of 239+240Pu were similar to available 137Cs profiles, verifying the application of plutonium isotopes for determining sediment chronologies and expanding the established geochronology for Englebright Lake sediments. This study of sediment accumulation in Englebright Lake demonstrates the application of plutonium isotopes in establishing chronologies in coarse sediments and highlights the potential for plutonium to offer new insights into patterns of coarse sediment accumulation.
NASA Astrophysics Data System (ADS)
Li, Y.; McDougall, T. J.
2016-02-01
Coarse resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale. Some researchers have shown that these spatial correlations play a role in the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and are ready to be evaluated by using model data directly. In this study, data from a high resolution ocean model have been used to estimate the accuracy of this HRM approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine grid scale. The transport calculated on the coarse grid is then compared to the transport on the original high resolution grid accumulated over a corresponding number of grid boxes. The preliminary results have shown that the estimates on coarse resolution grids roughly match the corresponding transports on high resolution grids.
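The coarse-grid construction described above can be illustrated in a few lines: a fine-resolution field is box-car averaged onto a coarser grid by averaging non-overlapping blocks of fine cells. The field and the coarsening factor below are arbitrary examples.

```python
import numpy as np

# Box-car averaging of a fine-resolution 2-D field onto a coarse grid.
# The synthetic field and the coarsening factor are illustrative choices.

def boxcar_coarsen(field, factor):
    """Average non-overlapping factor x factor blocks of a 2-D field."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.random.default_rng(1).standard_normal((128, 256))   # stand-in for a model field
coarse = boxcar_coarsen(fine, 8)                               # 8 x 8 fine cells per coarse box
print(fine.shape, "->", coarse.shape)
```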
Recycled Coarse Aggregate Produced by Pulsed Discharge in Water
NASA Astrophysics Data System (ADS)
Namihira, Takao; Shigeishi, Mitsuhiro; Nakashima, Kazuyuki; Murakami, Akira; Kuroki, Kaori; Kiyan, Tsuyoshi; Tomoda, Yuichi; Sakugawa, Takashi; Katsuki, Sunao; Akiyama, Hidenori; Ohtsu, Masayasu
In Japan, the recycling ratio of concrete scrap has been kept above 98% since the Law for the Recycling of Construction Materials was enforced in 2000. At present, most concrete scrap is recycled as Lower Subbase Course Material. However, it is predicted to be difficult to maintain this high recycling ratio in the near future because the amount of concrete scrap is increasing rapidly and is expected to reach over three times the present level by 2010. In addition, the demand for concrete scrap as Lower Subbase Course Material has decreased. Therefore, new ways to reuse concrete scrap must be developed. Concrete scrap normally consists of 70% coarse aggregate, 19% water, and 11% cement. To obtain a higher overall recycling ratio, a higher recycling ratio of coarse aggregate is desired. In this paper, a new method for recycling coarse aggregate from concrete scrap has been developed and demonstrated. The system includes a Marx generator and a point-to-hemispherical-mesh electrode immersed in water. In the demonstration, a test piece of concrete scrap was located between the electrodes and was treated by the pulsed discharge. After discharge treatment of the test piece, the recycled coarse aggregates were evaluated under JIS and TS standards and had sufficient quality for use as coarse aggregate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clifford, David J.; Harris, James M.
2014-12-01
This is the IDC Re-Engineering Phase 2 project Integrated Master Plan (IMP). The IMP presents the major accomplishments planned over time to re-engineer the IDC system. The IMP and the associated Integrated Master Schedule (IMS) are used for planning, scheduling, executing, and tracking the project technical work efforts. Revisions: Version 1.0, 12/2014, IDC Re-engineering Project Team, initial delivery, authorized by M. Harris.
NASA Astrophysics Data System (ADS)
Zhang, H.; Harter, T.; Sivakumar, B.
2005-12-01
Facies-based geostatistical models have become important tools for the stochastic analysis of flow and transport processes in heterogeneous aquifers. However, little is known about the dependency of these processes on the parameters of facies-based geostatistical models. This study examines the nonpoint source solute transport normal to the major bedding plane in the presence of interconnected high conductivity (coarse-textured) facies in the aquifer medium and the dependence of the transport behavior upon the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute travel time probability distribution functions (pdfs) for solute flux from the water table to the bottom boundary (production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models with horizontal to vertical facies mean length anisotropy ratios, ek, from 25:1 to 300:1, and with a wide range of facies volume proportions (e.g., from 5% to 95% coarse-textured facies). Predictions of travel time pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer, the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and - to a lesser degree - the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, travel time pdfs are not log-normally distributed as is often assumed. Also, macrodispersive behavior (variance of the travel time pdf) was found to not be a unique function of the conductivity variance. The skewness of the travel time pdf varied from negatively skewed to strongly positively skewed within the parameter range examined. We also show that the Markov chain approach may give significantly different travel time pdfs when compared to the more commonly used Gaussian random field approach even though the first and second order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Harter, Thomas; Sivakumar, Bellie
2006-06-01
Facies-based geostatistical models have become important tools for analyzing flow and mass transport processes in heterogeneous aquifers. Yet little is known about the relationship between these latter processes and the parameters of facies-based geostatistical models. In this study, we examine the transport of a nonpoint source solute normal (perpendicular) to the major bedding plane of an alluvial aquifer medium that contains multiple geologic facies, including interconnected, high-conductivity (coarse textured) facies. We also evaluate the dependence of the transport behavior on the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system's hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute traveltime probability density function (pdf) for solute flux from the water table to the bottom boundary (the production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models, with mean length anisotropy ratios for horizontal to vertical facies, ek, from 25:1 to 300:1 and with a wide range of facies volume proportions (e.g., from 5 to 95% coarse-textured facies). Predictions of traveltime pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer. Those predictions of traveltime pdfs also are affected by the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and, to a lesser degree, the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, traveltime is not lognormally distributed as is often assumed. Also, macrodispersive behavior (variance of the traveltime) is found not to be a unique function of the conductivity variance. For the parameter range examined, the third moment of the traveltime pdf varies from negatively skewed to strongly positively skewed. We also show that the Markov chain approach may give significantly different traveltime distributions when compared to the more commonly used Gaussian random field approach, even when the first- and second-order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport, and uncertainty about that choice must be considered in evaluating the results.
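A minimal one-dimensional sketch of a transition-probability (Markov chain) facies simulation is given below for readers unfamiliar with the approach. The facies set and transition matrix are hypothetical; the studies above use a three-dimensional Markov chain model conditioned on facies proportions, mean lengths, and juxtapositional tendencies.

```python
import numpy as np

# 1-D Markov chain facies simulation in the vertical direction.
# The facies names and the transition matrix below are hypothetical.

facies = ["gravel", "sand", "silt", "clay"]
# P[i, j]: probability that the next cell (moving upward) is facies j given facies i
P = np.array([
    [0.80, 0.15, 0.03, 0.02],
    [0.10, 0.80, 0.07, 0.03],
    [0.02, 0.08, 0.80, 0.10],
    [0.01, 0.04, 0.15, 0.80],
])

rng = np.random.default_rng(42)
n_cells, column = 200, [0]                 # start the column in 'gravel'
for _ in range(n_cells - 1):
    column.append(rng.choice(4, p=P[column[-1]]))

proportions = np.bincount(column, minlength=4) / n_cells
for name, p in zip(facies, proportions):
    print(f"{name:6s} {p:.2f}")
```

Longer mean facies lengths correspond to larger diagonal entries of the transition matrix, which is one way the anisotropy ratios discussed above enter such models.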
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
An integrated knowledge system for wind tunnel testing - Project Engineers' Intelligent Assistant
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Shi, George Z.; Hoyt, W. A.; Steinle, Frank W., Jr.
1993-01-01
The Project Engineers' Intelligent Assistant (PEIA) is an integrated knowledge system developed using artificial intelligence technology, including hypertext, expert systems, and dynamic user interfaces. This system integrates documents, engineering codes, databases, and knowledge from domain experts into an enriched hypermedia environment and was designed to assist project engineers in planning and conducting wind tunnel tests. PEIA is a modular system which consists of an intelligent user-interface, seven modules and an integrated tool facility. Hypermedia technology is discussed and the seven PEIA modules are described. System maintenance and updating is very easy due to the modular structure and the integrated tool facility provides user access to commercial software shells for documentation, reporting, or database updating. PEIA is expected to provide project engineers with technical information, increase efficiency and productivity, and provide a realistic tool for personnel training.
NASA Astrophysics Data System (ADS)
Voytishek, Anton V.; Shipilov, Nikolay M.
2017-11-01
In this paper, the systematization of numerical (computer-implemented) randomized functional algorithms for approximating a solution of the Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh, and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values at such points is impossible. Thus, for applied problems related to solving the Fredholm integral equation of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
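For context, the sketch below implements the basic mesh (Nyström) approach to a Fredholm equation of the second kind with a smooth toy kernel; it makes explicit why kernel values must be evaluated at fixed quadrature points, which is exactly the step that fails for kernels with integrable singularities and motivates the projection-type randomized algorithms.

```python
import numpy as np

# Mesh (Nystrom) scheme for x(s) = f(s) + lam * int_0^1 K(s, t) x(t) dt,
# using the trapezoidal rule. Kernel and right-hand side are toy choices.

lam, n = 0.5, 101
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)                                  # trapezoidal weights

K = np.exp(-np.abs(s[:, None] - s[None, :]))                  # kernel evaluated at fixed mesh points
f = np.sin(np.pi * s)                                         # toy right-hand side

A = np.eye(n) - lam * K * w[None, :]                          # (I - lam * K W) x = f
x = np.linalg.solve(A, f)
print("solution at s = 0.5:", x[n // 2])
```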
Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.
2015-01-01
We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all atom simulations that is used to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach are used to show that the adsorption and unfolding of the wild type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid-like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and is generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608
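The Boltzmann-inversion ingredient mentioned above can be sketched in a few lines: an initial coarse-grained pair potential is obtained from a target radial distribution function as U(r) = -kT ln g(r). The g(r) below is a synthetic toy target, not data from the all-atom simulations used by CAMELOT.

```python
import numpy as np

# Zeroth-iteration Boltzmann inversion of a target radial distribution
# function into a coarse-grained pair potential. The g(r) is synthetic.

kT = 2.494                                            # kJ/mol at ~300 K
r = np.linspace(0.3, 1.5, 121)                        # pair distance, nm
g_target = (1.0 + 0.6 * np.exp(-((r - 0.55) / 0.08) ** 2)
            - 0.9 * np.exp(-((r - 0.38) / 0.05) ** 2))  # toy RDF with a first peak and core depletion
g_target = np.clip(g_target, 1e-6, None)              # avoid log(0) in the repulsive core

U = -kT * np.log(g_target)                            # Boltzmann inversion
print("potential at the first-peak distance (kJ/mol):", U[np.argmin(np.abs(r - 0.55))])
```

In practice such a potential is only a starting guess and is then refined, for example by the regression and Bayesian optimization steps the abstract describes.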
NASA Astrophysics Data System (ADS)
Murray, A. Brad; Thieler, E. Robert
2004-02-01
Recent observations of inner continental shelves in many regions show numerous collections of relatively coarse sediment, which extend kilometers in the cross-shore direction and are on the order of 100 m wide. These "rippled scour depressions" have been interpreted to indicate concentrated cross-shelf currents. However, recent observations strongly suggest that they are associated with sediment transport along-shore rather than cross-shore. A new hypothesis for the origin of these features involves the large wave-generated ripples that form in the coarse material. Wave motions interacting with these large roughness elements generate near-bed turbulence that is greatly enhanced relative to that in other areas. This enhances entrainment and inhibits settling of fine material in an area dominated by coarse sediment. The fine sediment is then carried by mean currents past the coarse accumulations, and deposited where the bed is finer. We hypothesize that these interactions constitute a feedback tending to produce accumulations of fine material separated by self-perpetuating patches of coarse sediments. As with many types of self-organized bedforms, small features would interact as they migrate, leading to a better-organized, larger-scale pattern. As an initial test of this hypothesis, we use a numerical model treating the transport of coarse and fine sediment fractions, treated as functions of the local bed composition—a proxy for the presence of large roughness elements in coarse areas. Large-scale sorted patterns exhibiting the main characteristics of the natural features result robustly in the model, indicating that this new hypothesis offers a plausible explanation for the phenomena.