Sample records for regime algorithm results

  1. An algorithm for engineering regime shifts in one-dimensional dynamical systems

    NASA Astrophysics Data System (ADS)

    Tan, James P. L.

    2018-01-01

    Regime shifts are discontinuous transitions between stable attractors hosting a system. They can occur as a result of a loss of stability in an attractor as a bifurcation is approached. In this work, we consider one-dimensional dynamical systems where attractors are stable equilibrium points. Relying on critical slowing down signals related to the stability of an equilibrium point, we present an algorithm for engineering regime shifts such that a system may escape an undesirable attractor into a desirable one. We test the algorithm on synthetic data from a one-dimensional dynamical system with a multitude of stable equilibrium points and also on a model of the population dynamics of spruce budworms in a forest. The algorithm and other ideas discussed here contribute to an important part of the literature on exercising greater control over the sometimes unpredictable nature of nonlinear systems.
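The critical-slowing-down signal the abstract relies on can be illustrated with a generic toy model (this is not the paper's algorithm; the dynamics, parameters, and seed below are invented for illustration). Near a bifurcation the restoring rate of an equilibrium weakens, so the lag-1 autocorrelation of fluctuations rises toward 1:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a standard critical-slowing-down indicator."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def simulate_ou(k, n=20000, dt=0.01, sigma=0.1, seed=0):
    """Euler-Maruyama simulation of dx = -k*x dt + sigma dW near an equilibrium."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - k * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# As the restoring rate k shrinks (bifurcation approached), recovery from
# perturbations slows and lag-1 autocorrelation rises toward 1.
ac_stable = lag1_autocorr(simulate_ou(k=5.0))
ac_critical = lag1_autocorr(simulate_ou(k=0.2))
```

Monitoring such an indicator is one way a controller could decide when a small push suffices to tip the system out of an undesirable attractor.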

  2. Dominant takeover regimes for genetic algorithms

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed forms, the result relaxes the previously held requirements for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
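The takeover-regime idea can be sketched with the standard idealized recursion for fitness-proportionate selection (this is textbook GA theory, not the paper's Euler-summation analysis; the fitness ratio and population sizes below are arbitrary):

```python
def takeover_time(pop_size, fitness_ratio):
    """Generations for a single copy of the best individual to take over the
    population under idealized fitness-proportionate selection (no mutation
    or crossover): p' = r*p / (r*p + (1 - p))."""
    p = 1.0 / pop_size                      # one copy of the best initially
    t = 0
    while p < 1.0 - 1.0 / pop_size:         # takeover: all but <1 individual
        p = fitness_ratio * p / (fitness_ratio * p + (1.0 - p))
        t += 1
    return t

# Takeover time grows only logarithmically with population size: each
# 100-fold increase in N adds a roughly constant number of generations.
times = [takeover_time(n, fitness_ratio=2.0) for n in (10**2, 10**4, 10**6)]
```

The near-constant increments in `times` echo the weak (logarithmic) dependence of convergence time on problem size noted in the abstract.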

  3. A multi-dimensional nonlinearly implicit, electromagnetic Vlasov-Darwin particle-in-cell (PIC) algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacón, Luis; CoCoMans Team

    2014-10-01

    For decades, the Vlasov-Darwin model has been recognized to be attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have been recently developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.
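The exact energy conservation claimed for the implicit scheme can be illustrated on the simplest analogue (a sketch of the principle only, not of the Vlasov-Darwin solver): a time-centered (Crank-Nicolson) update of a harmonic oscillator conserves the discrete energy x² + v² to round-off, where an explicit update would not.

```python
def step_centered(x, v, dt):
    """One time-centered (Crank-Nicolson) step of x' = v, v' = -x, solved
    implicitly in closed form:
        (x+ - x)/dt =  (v+ + v)/2
        (v+ - v)/dt = -(x+ + x)/2
    The resulting map is an exact rotation, hence norm-preserving."""
    a = dt / 2.0
    denom = 1.0 + a * a
    xp = ((1.0 - a * a) * x + dt * v) / denom
    vp = ((1.0 - a * a) * v - dt * x) / denom
    return xp, vp

x, v = 1.0, 0.0
e0 = x * x + v * v                 # discrete "energy" of the oscillator
for _ in range(1000):
    x, v = step_centered(x, v, 0.1)
```

After 1000 steps the energy drift is at round-off level, mirroring (in miniature) the exact discrete conservation the multi-dimensional PIC algorithm is built around.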

  4. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
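The finite-time Lyapunov exponent (FTLE) field at the heart of LCS computation can be sketched on a flow with a known answer. The linear saddle below is merely a stand-in for the multi-body gravitational dynamics, and the step sizes are illustrative; the GPU block decomposition itself is not reproduced here.

```python
import numpy as np

def flow_map(x0, y0, T=1.0):
    """Analytic flow map of the linear saddle xdot = x, ydot = -y."""
    return x0 * np.exp(T), y0 * np.exp(-T)

def ftle(x0, y0, T=1.0, h=1e-4):
    """FTLE via a finite-difference flow-map gradient: largest singular
    value of dPhi/dx0 gives the maximal stretching rate. For this
    decoupled flow the off-diagonal gradient entries vanish."""
    xr, _ = flow_map(x0 + h, y0, T); xl, _ = flow_map(x0 - h, y0, T)
    _, yu = flow_map(x0, y0 + h, T); _, yd = flow_map(x0, y0 - h, T)
    F = np.array([[(xr - xl) / (2 * h), 0.0],
                  [0.0, (yu - yd) / (2 * h)]])
    smax = np.linalg.svd(F, compute_uv=False)[0]   # largest singular value
    return np.log(smax) / T

# For the saddle the exact FTLE is 1 everywhere (stretching e^T along x).
val = ftle(0.3, -0.2)
```

In practice this computation is repeated for every grid point and integration window, which is exactly the embarrassingly parallel workload the paper maps onto CUDA blocks.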

  5. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

    an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to...regime for the optimization algorithm. 1 Introduction Sample average approximation (SAA) is a frequently used approach to solving stochastic programs...appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample

  6. Implicit gas-kinetic unified algorithm based on multi-block docking grid for multi-body reentry flows covering all flow regimes

    NASA Astrophysics Data System (ADS)

    Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu

    2016-12-01

    Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from highly rarefied free-molecule flow through transition to continuum, a new implicit cell-centered finite-volume scheme is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and a data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid is established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with the whole range of Knudsen numbers from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in the near-continuum and continuum regimes around a circular cylinder, with careful comparison against each other. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence. Flow problems involving two and three side-by-side cylinders are simulated from highly rarefied to near-continuum regimes, and the computed results are found in good agreement with related DSMC simulations and theoretical analysis, which verifies the accuracy and reliability of the present method. It is observed that the smaller the spacing between the bodies, the greater the obstruction at the cylindrical throat, the more asymmetric the flow field of each single body, and the larger the normal force coefficient. 
In the near-continuum transitional flow regime of near-space flight conditions, once the spacing between the bodies increases to six times the diameter of a single body, the interference effects of the multiple bodies become negligible. Computing practice has confirmed that the present method is feasible for computing the aerodynamics and revealing the flow mechanisms around complex multi-body vehicles covering all flow regimes, from the gas-kinetic point of view of solving the unified Boltzmann model velocity distribution function equation.

  7. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    NASA Astrophysics Data System (ADS)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data, obtained from Monte Carlo simulation, indicate that the comprehensive performance of the IAGA-SC algorithm exceeds the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It especially shows great advantages in restoring simply connected flow regimes and the shapes of objects. In addition, a CT experiment with two-phase flow phantoms was conducted on the accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
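The adaptive Pc/Pm idea can be sketched in the style of the classical adaptive GA of Srinivas and Patnaik: fitter individuals are disturbed less, below-average ones are explored more. The constants and functional form below are illustrative, not the paper's improved rules.

```python
def adaptive_rates(f, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Adaptive crossover (Pc) and mutation (Pm) probabilities, sketched
    after the classical AGA: rates shrink linearly to zero for the fittest
    individual and stay high for below-average ones. The constants k1..k4
    are illustrative, not taken from the paper."""
    if f_max == f_avg:                       # degenerate: converged population
        return k3, k4
    if f >= f_avg:
        pc = k1 * (f_max - f) / (f_max - f_avg)
        pm = k2 * (f_max - f) / (f_max - f_avg)
    else:
        pc, pm = k3, k4
    return pc, pm

pc_best, pm_best = adaptive_rates(f=1.0, f_max=1.0, f_avg=0.6)  # best member
pc_weak, pm_weak = adaptive_rates(f=0.4, f_max=1.0, f_avg=0.6)  # weak member
```

Preserving the current best solution (zero disruption) while keeping exploration alive elsewhere is what lets such schemes approach the global optimum without premature convergence.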

  8. Automation of a high risk medication regime algorithm in a home health care population.

    PubMed

    Olson, Catherine H; Dierich, Mary; Westra, Bonnie L

    2014-10-01

    Create an automated algorithm for predicting elderly patients' medication-related risks for readmission and validate it by comparing results with a manual analysis of the same patient population. Outcome and Assessment Information Set (OASIS) and medication data were reused from a previous, manual study of 911 patients from 15 Medicare-certified home health care agencies. The medication data were converted into standardized drug codes using APIs managed by the National Library of Medicine (NLM) and then integrated into an automated algorithm that calculates patients' high risk medication regime scores (HRMRs). The results of the algorithm and the manual process were compared to determine how frequently the two approaches derived the same HRMR scores, which are predictive of readmission. HRMR scores are composed of polypharmacy (number of drugs), Potentially Inappropriate Medications (PIM) (drugs risky to the elderly), and the Medication Regimen Complexity Index (MRCI) (complex dose forms, instructions or administration). The algorithm produced polypharmacy, PIM, and MRCI scores that matched 99%, 87% and 99% of the scores, respectively, from the manual analysis. Imperfect match rates resulted from discrepancies in how drugs were classified and coded by the manual analysis vs. the automated algorithm. HRMR rules lack clarity, resulting in clinical judgments for manual coding that were difficult to replicate in the automated analysis. The high comparison rates for the three measures suggest that an automated clinical tool could use patients' medication records to predict their risks of avoidable readmissions. Copyright © 2014 Elsevier Inc. All rights reserved.
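Assembling the three HRMR components might look like the sketch below. The field names, PIM list, and MRCI weights are entirely hypothetical placeholders; the study's actual coding rules (NLM drug codes, validated PIM criteria, the full MRCI instrument) are much richer.

```python
def hrmr_components(med_list, pim_set, mrci_weights):
    """Illustrative assembly of the three HRMR components described in the
    abstract. All names and weights here are hypothetical, not the study's
    actual coding rules."""
    polypharmacy = len(med_list)                            # number of drugs
    pim = sum(1 for m in med_list if m["code"] in pim_set)  # risky-drug count
    # MRCI sketch: complexity points for dose form plus extra directions.
    mrci = sum(mrci_weights.get(m["form"], 1) + m.get("extra_directions", 0)
               for m in med_list)
    return polypharmacy, pim, mrci

# Hypothetical two-drug regimen.
meds = [{"code": "warfarin", "form": "tablet", "extra_directions": 1},
        {"code": "insulin",  "form": "injection"}]
scores = hrmr_components(meds, pim_set={"warfarin"},
                         mrci_weights={"tablet": 1, "injection": 3})
```

The study's automation challenge was precisely the inputs this sketch assumes away: mapping free-text medication records onto standardized codes and classifications consistently.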

  9. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.
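The core step of a Bernaola-Galván-style segmentation is finding the split point that maximizes the likelihood gain from modeling the two halves with separate parameters. The sketch below does this for a plain Poisson rate change; the paper generalizes the idea to compound Poisson processes and adds a significance test for recursive splitting, neither of which is reproduced here.

```python
import numpy as np

def _poisson_ll(x):
    """Poisson log-likelihood at the MLE rate (constant x! terms dropped)."""
    lam = x.mean()
    return float(x.sum() * np.log(lam) - x.size * lam) if lam > 0 else 0.0

def best_poisson_split(counts):
    """Split point maximizing the log-likelihood gain of using separate
    Poisson rates on the two halves -- the elementary building block of a
    recursive segmentation (sketch only)."""
    counts = np.asarray(counts, dtype=float)
    full = _poisson_ll(counts)
    best_i, best_gain = None, 0.0
    for i in range(1, len(counts)):
        gain = _poisson_ll(counts[:i]) + _poisson_ll(counts[i:]) - full
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i, best_gain

# Synthetic series with a rate change at t = 200.
rng = np.random.default_rng(1)
series = np.concatenate([rng.poisson(2.0, 200), rng.poisson(10.0, 200)])
split, gain = best_poisson_split(series)
```

A full segmentation would accept the split only if the gain is statistically significant and then recurse on each patch.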

  10. Enhanced Trajectory Based Similarity Prediction with Uncertainty Quantification

    DTIC Science & Technology

    2014-10-02

    challenge by obtaining the highest score by using a data-driven prognostics method to predict the RUL of a turbofan engine (Saxena & Goebel, PHM08...process for multi-regime health assessment. To illustrate multi-regime partitioning, the “Turbofan Engine Degradation simulation” data set from...hence the name k-means. Figure 3 shows the results of the k-means clustering algorithm on the “Turbofan Engine Degradation simulation” data set. As

  11. Fractal dimension and fuzzy logic systems for broken rotor bar detection in induction motors at start-up and steady-state regimes

    NASA Astrophysics Data System (ADS)

    Amezquita-Sanchez, Juan P.; Valtierra-Rodriguez, Martin; Perez-Ramirez, Carlos A.; Camarena-Martinez, David; Garcia-Perez, Arturo; Romero-Troncoso, Rene J.

    2017-07-01

    Squirrel-cage induction motors (SCIMs) are key machines in many industrial applications. In this regard, monitoring their operating condition to avoid damage and reduce economic losses has become a demanding task for industry. In the literature, several techniques and methodologies to detect faults that affect the integrity and performance of SCIMs have been proposed. However, they have only been focused on analyzing either the start-up transient or the steady-state operation regime, the two common operating scenarios in real practice. In this work, a novel methodology for broken rotor bar (BRB) detection in SCIMs during both start-up and steady-state operation regimes is proposed. It consists of two main steps. In the first one, three-axis vibration signals are analyzed using fractal dimension (FD) theory. Since different FD-based algorithms can give different results, three algorithms, namely Katz’s FD, Higuchi’s FD, and the box dimension, are tested. In the second step, a fuzzy logic system for each regime is presented for automatic diagnosis. To validate the proposal, motors with different damage levels have been tested: one with a partially broken rotor bar, a second with one fully broken bar, and a third with two broken bars. The obtained results demonstrate the effectiveness of the proposed methodology.
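Of the three estimators the paper compares, Katz's fractal dimension is the simplest to sketch: it relates the total length of the signal "curve" to its diameter. This is a generic implementation of the published Katz formula, not the paper's processing chain (which applies FD analysis to three-axis vibration signals before fuzzy classification).

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal:
    FD = log10(n) / (log10(n) + log10(d / L)),
    with L the total curve length, d the maximum distance from the first
    sample, and n the number of steps."""
    x = np.asarray(x, dtype=float)
    n = x.size - 1
    L = np.hypot(1.0, np.diff(x)).sum()                    # curve length
    d = np.hypot(np.arange(1, x.size), x[1:] - x[0]).max() # curve diameter
    return float(np.log10(n) / (np.log10(n) + np.log10(d / L)))

# Sanity checks: a straight ramp has FD = 1; an irregular (noisy) signal
# has FD > 1, which is what makes FD useful as a fault-sensitive feature.
fd_ramp = katz_fd(np.arange(100.0))
fd_noise = katz_fd(np.random.default_rng(0).standard_normal(1000))
```

In the paper's pipeline, such FD values (from Katz, Higuchi, and box-dimension estimators) become the inputs to the per-regime fuzzy logic classifiers.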

  12. Resolving boosted jets with XCone

    DOE PAGES

    Thaler, Jesse; Wilkason, Thomas F.

    2015-12-01

    We show how the recently proposed XCone jet algorithm smoothly interpolates between resolved and boosted kinematics. When using standard jet algorithms to reconstruct the decays of hadronic resonances like top quarks and Higgs bosons, one typically needs separate analysis strategies to handle the resolved regime of well-separated jets and the boosted regime of fat jets with substructure. XCone, by contrast, is an exclusive cone jet algorithm that always returns a fixed number of jets, so jet regions remain resolved even when (sub)jets are overlapping in the boosted regime. In this paper, we perform three LHC case studies (dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs) that demonstrate the physics applications of XCone over a wide kinematic range.

  13. Algorithm and Software for Calculating Optimal Regimes of the Process Water Supply System at the Kalininskaya NPP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murav’ev, V. P., E-mail: murval@mail.ru; Kochetkov, A. V.; Glazova, E. G.

    An algorithm and software for calculating the optimal operating regimes of the process water supply system at the Kalininskaya NPP are described. The parameters of the optimal regimes are determined for time-varying meteorological conditions and condensation loads of the NPP. The optimal flow of the cooling water in the turbines is determined computationally; a regime map is composed with the data on the optimal water consumption distribution between the coolers, displaying the regimes with an admissible heat load on the natural cooling lakes. Optimizing the cooling system for a 4000-MW NPP will make it possible to conserve at least 155,000 MW·h of electricity per year. The procedure developed can be used to optimize the process water supply systems of nuclear and thermal power plants.

  14. Development of the One-Sided Nonlinear Adaptive Doppler Shift Estimation

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Singh, Upendra N.; Kavaya, Michael J.; Serror, Judith A.

    2009-01-01

    The new development of a one-sided nonlinear adaptive Doppler shift estimation technique (NADSET) is introduced. The background of the algorithm and a brief overview of NADSET are presented. The new technique is applied to the wind parameter estimates from a 2-micron wavelength coherent Doppler lidar system called VALIDAR, located at NASA Langley Research Center in Virginia. The new technique enhances wind parameters such as Doppler shift and power estimates in low signal-to-noise-ratio (SNR) regimes using the estimates in high SNR regimes as the algorithm scans the range bins from low to high altitude. The original NADSET utilizes the statistics in both the lower and the higher range bins to refine the wind parameter estimates in between. The results of the two different approaches of NADSET are compared.

  15. MUSIC imaging method for electromagnetic inspection of composite multi-layers

    NASA Astrophysics Data System (ADS)

    Rodeghiero, Giacomo; Ding, Ping-Ping; Zhong, Yu; Lambert, Marc; Lesselier, Dominique

    2015-03-01

    A first-order asymptotic formulation of the electric field scattered by a small inclusion (with respect to the wavelength in the dielectric regime or to the skin depth in the conductive regime) embedded in composite material is given. It is validated by comparison with results obtained using a Method of Moments (MoM). A non-iterative MUltiple SIgnal Classification (MUSIC) imaging method is utilized in the same configuration to locate the position of small defects. The effectiveness of the imaging algorithm is illustrated through some numerical examples.
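The noise-subspace projection at the heart of MUSIC can be sketched on a toy uniform linear array (a direction-finding setting, far simpler than the paper's composite multi-layer configuration with its dyadic Green's functions). All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8                                       # sensors, half-wavelength spacing
true_angles = np.deg2rad([20.0, -35.0])     # two "defect" directions
pos = np.arange(m)

def steering(theta):
    """Array response (steering vector) for a plane wave from angle theta."""
    return np.exp(1j * np.pi * pos * np.sin(theta))

# Simulate 200 snapshots of two uncorrelated sources plus weak noise.
A = np.column_stack([steering(t) for t in true_angles])
S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
N = 0.01 * (rng.standard_normal((m, 200)) + 1j * rng.standard_normal((m, 200)))
X = A @ S + N
R = X @ X.conj().T / 200                    # sample covariance matrix

# MUSIC: the noise subspace (smallest eigenvectors) is orthogonal to the
# true steering vectors, so the pseudospectrum peaks at the sources.
_, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, : m - 2]                          # noise subspace (2 sources assumed)
grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2
              for g in grid])
```

The same orthogonality principle, with steering vectors replaced by the asymptotic field of a small inclusion, underlies the defect-localization results of the paper.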

  16. Simulated Radar Characteristics of LBA Convective Systems: Easterly and Westerly Regimes

    NASA Technical Reports Server (NTRS)

    Lang, Stephen E.; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    The 3D Goddard Cumulus Ensemble (GCE) model was used to simulate convection that occurred during the TRMM LBA field experiment in Brazil. Convection in this region can be categorized into two different regimes. Low-level easterly flow results in moderate to high CAPE and a drier environment; convection is more intense, like that seen over continents. Low-level westerly flow results in low CAPE and a moist environment; convection is weaker and more widespread, characteristic of oceanic or monsoon-like systems. The GCE model has been used to study both regimes in order to provide cloud datasets that are representative of both environments in support of TRMM rainfall and heating algorithm development. Two different cases are analyzed: Jan 26, 1999, an easterly regime case, and Feb 23, 1999, a westerly regime case. The Jan 26 case is an organized squall line, while the Feb 23 case is less organized with only transient lines. Radar signatures, including CFADs, from the two simulated cases are compared to each other and with observations. The microphysical processes simulated in the model are also compared between the two cases.

  17. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    PubMed

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.

  18. Optimal Pitch Thrust-Vector Angle and Benefits for all Flight Regimes

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Bolonkin, Alexander

    2000-01-01

    The NASA Dryden Flight Research Center is exploring the optimum thrust-vector angle on aircraft. Simple aerodynamic performance models for various phases of aircraft flight are developed, and optimization equations and algorithms are presented in this report. Results for optimal thrust-vector angles and their associated benefits are given for the various flight regimes of aircraft (takeoff, climb, cruise, descent, final approach, and landing), as are results for a typical wide-body transport aircraft. The benefits accruable for this class of aircraft are small, but the technique can be applied to other conventionally configured aircraft. The lower L/D aerodynamic characteristics of fighters generally would produce larger benefits than those produced for transport aircraft.
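The flavor of such an optimization can be sketched numerically for level cruise: thrust at pitch angle theta must balance drag (T cos θ = D) while its vertical component offloads the wing (L + T sin θ = W), which reduces induced drag. The aircraft numbers and drag polar below are invented for illustration, not taken from the report.

```python
import math

def required_thrust(theta, W=500e3, qS=1.0e6, cd0=0.02, k=0.045):
    """Thrust needed for steady level flight with the thrust vector pitched
    by theta, using a parabolic drag polar CD = cd0 + k*CL^2. Since CL
    depends on T*sin(theta), T is found by fixed-point iteration.
    All parameters are illustrative (hypothetical wide-body-ish numbers)."""
    T = 50e3
    for _ in range(100):
        cl = (W - T * math.sin(theta)) / qS   # lift coefficient after offload
        D = qS * (cd0 + k * cl * cl)          # drag from the polar
        T = D / math.cos(theta)               # horizontal balance
    return T

# Scan pitch angles: a small positive thrust-vector angle beats zero,
# but the benefit is modest -- consistent with the report's conclusion.
thetas = [math.radians(d) for d in range(0, 21)]
Ts = [required_thrust(t) for t in thetas]
best = thetas[Ts.index(min(Ts))]
```

The optimum lands at a few degrees, and the thrust saving relative to theta = 0 is a fraction of a percent, echoing the "small but real" benefit reported for transport aircraft.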

  19. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a complex algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames containing weak tracks of space objects, which can be discrete, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and the detection decision. Weak tracks of space objects were modeled on real frames of the night starry sky obtained in the stationary-telescope regime. It is shown that the limiting magnitude of the optoelectronic device improved by almost 2 magnitudes.

  20. Parameter estimates in binary black hole collisions using neural networks

    NASA Astrophysics Data System (ADS)

    Carrillo, M.; Gracia-Linares, M.; González, J. A.; Guzmán, F. S.

    2016-10-01

    We present an algorithm based on artificial neural networks (ANNs) that estimates the mass ratio in a binary black hole collision from given gravitational wave (GW) strains. In this analysis, the ANN is trained with a sample of GW signals generated with numerical simulations. The effectiveness of the algorithm is evaluated with GWs, also generated with simulations, for mass ratios unknown to the ANN. We measure the accuracy of the algorithm in the interpolation and extrapolation regimes. We present results for noise-free signals and signals contaminated with Gaussian noise, in order to assess the dependence of the method's accuracy on the signal-to-noise ratio.
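The regression pipeline can be sketched with a minimal one-hidden-layer network. Everything below is a stand-in: the real inputs are simulated GW strains and the paper's network architecture is not reproduced; here a toy oscillatory feature vector parameterized by the mass ratio q plays the role of the waveform.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(q, n=8):
    """Toy chirp-like signature standing in for a GW strain (hypothetical)."""
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2.0 * np.pi * (1.0 + q) * t)

# Training set: feature vectors labeled by mass ratio q in [1, 4].
q_train = np.linspace(1.0, 4.0, 200)
X = np.array([features(q) for q in q_train])
y = q_train.reshape(-1, 1)

# One-hidden-layer regression network, plain batch gradient descent.
W1 = 0.5 * rng.standard_normal((8, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse_start = float(((pred0 - y) ** 2).mean())

lr = 0.02
for _ in range(3000):
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

_, pred_end = forward(X)
mse_end = float(((pred_end - y) ** 2).mean())
```

Evaluating such a network at mass ratios inside the training range probes the interpolation regime; ratios outside it probe extrapolation, which the paper finds substantially harder.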

  1. A solution algorithm for fluid–particle flows across all flow regimes

    DOE PAGES

    Kong, Bo; Fox, Rodney O.

    2017-05-12

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.
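The cell-wise splitting between the two flux solvers can be sketched as a volume-fraction-based selection. The thresholds and linear blending below are hypothetical placeholders; the paper's switch is dynamic and local but its exact criterion is not reproduced here.

```python
def flux_weights(alpha_p, lo=0.01, hi=0.1):
    """Illustrative blend between the kinetic (dilute) and hydrodynamic
    (dense) free-transport fluxes based on the local particle volume
    fraction alpha_p. Thresholds lo/hi are invented for the sketch."""
    if alpha_p <= lo:
        w_hydro = 0.0                       # very dilute: fully kinetic
    elif alpha_p >= hi:
        w_hydro = 1.0                       # dense: fully hydrodynamic
    else:
        w_hydro = (alpha_p - lo) / (hi - lo)  # linear transition band
    return w_hydro, 1.0 - w_hydro

w_dense = flux_weights(0.4)     # close-packed region
w_dilute = flux_weights(1e-4)   # nearly collisionless region
w_mid = flux_weights(0.055)     # transition band
```

A smooth transition band avoids flux discontinuities at the interface between regions handled by the two solvers, which is essential when both regimes coexist in one domain.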

  2. A solution algorithm for fluid-particle flows across all flow regimes

    NASA Astrophysics Data System (ADS)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.

  3. A solution algorithm for fluid–particle flows across all flow regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Bo; Fox, Rodney O.

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.

  4. Land degradation and property regimes

    Treesearch

    Paul M. Beaumont; Robert T. Walker

    1996-01-01

    This paper addresses the relationship between property regimes and land degradation outcomes, in the context of peasant agriculture. We consider explicitly whether private property provides for superior soil resource conservation, as compared to common property and open access. To assess this we implement optimization algorithms on a supercomputer to address resource...

  5. The WACMOS-ET project – Part 1: Tower-scale evaluation of four remote-sensing-based evapotranspiration algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, D.; Jimenez, C.; Miralles, D. G.

    The WAter Cycle Multi-mission Observation Strategy – EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R² = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R² = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. 
In conclusion, an extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a common grid to facilitate global estimates) confirmed the original findings.

  6. The WACMOS-ET project – Part 1: Tower-scale evaluation of four remote-sensing-based evapotranspiration algorithms

    DOE PAGES

    Michel, D.; Jimenez, C.; Miralles, D. G.; ...

    2016-02-23

    The WAter Cycle Multi-mission Observation Strategy – EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observation data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R² = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R² = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types.
    In conclusion, an extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a common grid to facilitate global estimates) confirmed the original findings.
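    The tower-versus-model agreement above is reported as a coefficient of determination (R²). As a minimal illustration of how such a skill score is computed, here is a hedged pure-Python sketch; the daily ET numbers are invented for the example, not WACMOS-ET data.

```python
def r_squared(obs, est):
    """Coefficient of determination between observed and estimated series."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Invented daily ET values (mm/day): tower observations vs. a model estimate
tower = [2.1, 2.4, 3.0, 2.8, 1.9, 2.5, 3.2]
model = [2.0, 2.6, 2.9, 2.6, 2.1, 2.4, 3.0]
score = r_squared(tower, model)
```

    A perfect model would score 1.0; the gap below 1.0 grows with the residual variance.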

  7. Cloud-Resolving Model Simulations of LBA Convective Systems: Easterly and Westerly Regimes

    NASA Technical Reports Server (NTRS)

    Lang, Stephen E.; Tao, Wei-Kuo

    2002-01-01

    The 3D Goddard Cumulus Ensemble (GCE) model was used to simulate convection that occurred during the TRMM LBA field experiment in Brazil. Convection in this region can be categorized into two different regimes. Low-level easterly flow results in moderate to high CAPE and a drier environment; convection is more intense, like that seen over continents. Low-level westerly flow results in low CAPE and a moist environment; convection is weaker and more widespread, characteristic of oceanic or monsoon-like systems. The GCE model has been used to study both regimes in order to provide cloud data sets that are representative of both environments in support of TRMM rainfall and heating algorithm development. Two different cases are presented: Jan 26, 1999, an easterly regime case, and Feb 23, 1999, a westerly regime case. The Jan 26 case is an organized squall line and is initialized with a standard cold pool. The sensitivity to mid-level sounding moisture and wind shear will also be shown. The Feb 23 case is less organized, with only transient lines, and is initialized with either warm bubbles or prescribed surface fluxes. Heating profiles, rainfall statistics and storm characteristics are compared and validated for the two cases against observations collected during the experiment.

  8. Atom based grain extraction and measurement of geometric properties

    NASA Astrophysics Data System (ADS)

    Martine La Boissonière, Gabriel; Choksi, Rustum

    2018-04-01

    We introduce an accurate, self-contained and automatic atom-based numerical algorithm to characterize grain distributions in two-dimensional Phase Field Crystal (PFC) simulations. We compare the method with hand-segmented and known test grain distributions to show that the algorithm is able to extract grains and measure their area, perimeter and other geometric properties with high accuracy. Four input parameters must be set by the user, and their influence on the results is described. The method is currently tuned to extract data from PFC simulations in the hexagonal lattice regime, but the framework may be extended to more general problems.

  9. Lagrangian motion, coherent structures, and lines of persistent material strain.

    PubMed

    Samelson, R M

    2013-01-01

    Lagrangian motion in geophysical fluids may be strongly influenced by coherent structures that support distinct regimes in a given flow. The problems of identifying and demarcating Lagrangian regime boundaries associated with dynamical coherent structures in a given velocity field can be studied using approaches originally developed in the context of the abstract geometric theory of ordinary differential equations. An essential insight is that when coherent structures exist in a flow, Lagrangian regime boundaries may often be indicated as material curves on which the Lagrangian-mean principal-axis strain is large. This insight is the foundation of many numerical techniques for identifying such features in complex observed or numerically simulated ocean flows. The basic theoretical ideas are illustrated with a simple, kinematic traveling-wave model. The corresponding numerical algorithms for identifying candidate Lagrangian regime boundaries and lines of principal Lagrangian strain (also called Lagrangian coherent structures) are divided into parcel and bundle schemes; the latter include the finite-time and finite-size Lyapunov exponent/Lagrangian strain (FTLE/FTLS and FSLE/FSLS) metrics. Some aspects and results of oceanographic studies based on these approaches are reviewed, and the results are discussed in the context of oceanographic observations of dynamical coherent structures.
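    The FTLE diagnostic mentioned above can be sketched compactly: integrate nearby parcels through a velocity field, form the flow-map gradient by finite differences, and take the growth rate of the largest Cauchy-Green eigenvalue. The traveling-wave streamfunction below is illustrative, not the paper's exact kinematic model.

```python
import math

def velocity(x, y, t, c=0.5):
    """Illustrative traveling-wave streamfunction psi = sin(x - c t) sin(y)."""
    u = -math.sin(x - c * t) * math.cos(y)   # u = -d(psi)/dy
    v = math.cos(x - c * t) * math.sin(y)    # v =  d(psi)/dx
    return u, v

def advect(x, y, t0, T, dt=0.01):
    """Integrate one fluid parcel with forward Euler (a higher-order
    scheme could be substituted)."""
    t = t0
    for _ in range(int(round(T / dt))):
        u, v = velocity(x, y, t)
        x, y, t = x + dt * u, y + dt * v, t + dt
    return x, y

def ftle(x, y, t0=0.0, T=5.0, eps=1e-4):
    """Finite-time Lyapunov exponent from finite differences of the flow map."""
    xr, yr = advect(x + eps, y, t0, T)
    xl, yl = advect(x - eps, y, t0, T)
    xu, yu = advect(x, y + eps, t0, T)
    xd, yd = advect(x, y - eps, t0, T)
    fxx = (xr - xl) / (2 * eps); fxy = (xu - xd) / (2 * eps)
    fyx = (yr - yl) / (2 * eps); fyy = (yu - yd) / (2 * eps)
    # Largest eigenvalue of the 2x2 Cauchy-Green tensor F^T F, closed form
    p = fxx * fxx + fyx * fyx
    q = fxy * fxy + fyy * fyy
    r = fxx * fxy + fyx * fyy
    lam = 0.5 * (p + q + math.sqrt((p - q) ** 2 + 4 * r * r))
    return math.log(math.sqrt(lam)) / T

exponent = ftle(1.0, 1.0)   # large values flag candidate regime boundaries
```

    Ridges of this field, evaluated over a grid of seed points, are the candidate Lagrangian regime boundaries the abstract describes.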

  10. Three-Dimensional Radiative Hydrodynamics for Disk Stability Simulations: A Proposed Testing Standard and New Results

    NASA Astrophysics Data System (ADS)

    Boley, Aaron C.; Durisen, Richard H.; Nordlund, Åke; Lord, Jesse

    2007-08-01

    Recent three-dimensional radiative hydrodynamics simulations of protoplanetary disks report disparate disk behaviors, and these differences involve the importance of convection to disk cooling, the dependence of disk cooling on metallicity, and the stability of disks against fragmentation and clump formation. To guarantee trustworthy results, a radiative physics algorithm must demonstrate the capability to handle both the high and low optical depth regimes. We develop a test suite that can be used to demonstrate an algorithm's ability to relax to known analytic flux and temperature distributions, to follow a contracting slab, and to inhibit or permit convection appropriately. We then show that the radiative algorithm employed by Mejía and Boley et al. and the algorithm employed by Cai et al. pass these tests with reasonable accuracy. In addition, we discuss a new algorithm that couples flux-limited diffusion with vertical rays, we apply the test suite, and we discuss the results of evolving the Boley et al. disk with this new routine. Although the outcome is significantly different in detail with the new algorithm, we obtain the same qualitative answers. Our disk does not cool fast due to convection, and it is stable to fragmentation. We find an effective α~10-2. In addition, transport is dominated by low-order modes.

  11. A Unified Estimation Framework for State-Related Changes in Effective Brain Connectivity.

    PubMed

    Samdin, S Balqis; Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-04-01

    This paper addresses the critical problem of estimating time-evolving effective brain connectivity. Current approaches based on sliding window analysis or time-varying coefficient models do not simultaneously capture both slow and abrupt changes in causal interactions between different brain regions. To overcome these limitations, we develop a unified framework based on a switching vector autoregressive (SVAR) model. Here, the dynamic connectivity regimes are uniquely characterized by distinct vector autoregressive (VAR) processes and allowed to switch between quasi-stationary brain states. The state evolution and the associated directed dependencies are defined by a Markov process and the SVAR parameters. We develop a three-stage estimation algorithm for the SVAR model: 1) feature extraction using time-varying VAR (TV-VAR) coefficients; 2) preliminary regime identification via clustering of the TV-VAR coefficients; and 3) refined regime segmentation by Kalman smoothing and parameter estimation via the expectation-maximization algorithm under a state-space formulation, using initial estimates from the previous two stages. The proposed framework is adaptive to state-related changes and gives reliable estimates of effective connectivity. Simulation results show that our method provides accurate regime change-point detection and connectivity estimates. In real applications to brain signals, the approach was able to capture directed connectivity state changes in functional magnetic resonance imaging data linked with changes in stimulus conditions, and in epileptic electroencephalograms, differentiating ictal from nonictal periods. The proposed framework accurately identifies state-dependent changes in brain networks and provides estimates of connectivity strength and directionality. The proposed approach is useful in neuroscience studies that investigate the dynamics of underlying brain states.
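    The first two estimation stages (time-varying VAR coefficients, then clustering into candidate regimes) can be illustrated on a toy scalar series. The windowed AR(1) fit and simple two-means step below are stand-ins for the full TV-VAR and clustering machinery, with all parameters invented.

```python
import random

def ar1_series(phis, n_per, noise=0.05, seed=7):
    """Piecewise-stationary AR(1) data: one quasi-stationary segment
    (one 'regime') per coefficient in phis."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for phi in phis:
        for _ in range(n_per):
            x = phi * x + rng.gauss(0.0, noise)
            out.append(x)
    return out

def sliding_ar1(x, win=40):
    """Stage 1 analogue: time-varying AR(1) coefficient by windowed
    least squares."""
    coeffs = []
    for s in range(len(x) - win):
        seg = x[s:s + win + 1]
        num = sum(a * b for a, b in zip(seg[1:], seg[:-1]))
        den = sum(a * a for a in seg[:-1])
        coeffs.append(num / den if den else 0.0)
    return coeffs

def two_means(vals, iters=50):
    """Stage 2 analogue: cluster the coefficients into two candidate regimes."""
    c0, c1 = min(vals), max(vals)
    for _ in range(iters):
        g0 = [v for v in vals if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in vals if abs(v - c0) > abs(v - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return sorted([c0, c1])

series = ar1_series([0.2, 0.9], 150)     # regime switch halfway through
centers = two_means(sliding_ar1(series))
```

    The two cluster centers recover the two dynamical regimes; stage 3 of the paper then refines the segmentation under a state-space model.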

  12. Dynamic trajectory analysis of superparamagnetic beads driven by on-chip micromagnets

    PubMed Central

    Abedini-Nassab, Roozbeh; Lim, Byeonghwa; Yang, Ye; Howdyshell, Marci; Sooryakumar, Ratnasingham; Yellen, Benjamin B.

    2015-01-01

    We investigate the non-linear dynamics of superparamagnetic beads moving around the periphery of patterned magnetic disks in the presence of an in-plane rotating magnetic field. Three different dynamical regimes are observed in experiments, including (1) phase-locked motion at low driving frequencies, (2) phase-slipping motion above the first critical frequency fc1, and (3) phase-insulated motion above the second critical frequency fc2. Experiments with Janus particles were used to confirm that the beads move by sliding rather than rolling. The rest of the experiments were conducted on spherical, isotropic magnetic beads, in which automated particle position tracking algorithms were used to analyze the bead dynamics. Experimental results in the phase-locked and phase-slipping regimes correlate well with numerical simulations. Additional assumptions are required to predict the onset of the phase-insulated regime, in which the beads are trapped in closed orbits; however, the origin of the phase-insulated state appears to result from local magnetization defects. These results indicate that these three dynamical states are universal properties of bead motion in non-uniform oscillators. PMID:26648596

  13. A genetic algorithm-based optimization model for pool boiling heat transfer on horizontal rod heaters at isolated bubble regime

    NASA Astrophysics Data System (ADS)

    Alavi Fazel, S. Ali

    2017-09-01

    A new optimized model that can predict heat transfer in nucleate boiling at the isolated bubble regime is proposed for pool boiling on a horizontal rod heater. This model is developed based on the results of direct observations of the physical boiling phenomena. Boiling heat flux, wall temperature, bubble departure diameter, bubble generation frequency and bubble nucleation site density have been experimentally measured. Water and ethanol were used as two different boiling fluids. Heating surfaces were made of several metals with various degrees of roughness. The model considers various mechanisms such as latent heat transfer due to micro-layer evaporation, transient conduction due to thermal boundary layer reformation, natural convection, heat transfer due to sliding bubbles and bubble super-heating. The fractional contributions of the individual heat transfer mechanisms have been calculated by a genetic algorithm. The results show that at wall temperature differences greater than about 3 K, the mechanisms rank from highest to lowest fractional contribution as follows: bubble-sliding transient conduction, non-sliding transient conduction, micro-layer evaporation, natural convection, radial forced convection and bubble super-heating. The performance of the new optimized model has been verified by comparison with existing experimental data.
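    The idea of letting a genetic algorithm apportion fractional contributions among candidate heat-transfer mechanisms can be sketched as a small constrained fit. The per-mechanism fluxes, weights and GA settings below are invented for illustration, not the paper's data.

```python
import random

rng = random.Random(0)

# Hypothetical per-mechanism heat fluxes (kW/m^2) at four operating points;
# rows: micro-layer evaporation, transient conduction, natural convection.
mech = [[12.0, 18.0, 25.0, 33.0],
        [ 8.0, 11.0, 15.0, 20.0],
        [ 3.0,  3.5,  4.0,  4.5]]
true_w = [0.55, 0.35, 0.10]   # "unknown" fractional contributions to recover
target = [sum(w * row[j] for w, row in zip(true_w, mech)) for j in range(4)]

def normalize(w):
    """Project onto the simplex: clip negatives, rescale to sum to one."""
    w = [max(x, 1e-9) for x in w]
    s = sum(w)
    return [x / s for x in w]

def error(w):
    pred = [sum(wi * row[j] for wi, row in zip(w, mech)) for j in range(4)]
    return sum((p - t) ** 2 for p, t in zip(pred, target))

def genetic_fit(pop_size=40, gens=60, mut=0.1):
    pop = [normalize([rng.random() for _ in range(3)]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=error)
        elite = pop[:pop_size // 4]          # elitism keeps the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)      # crossover: blend two elites
            child = [(x + y) / 2 + rng.gauss(0.0, mut) for x, y in zip(a, b)]
            children.append(normalize(child))
        pop = elite + children
    return min(pop, key=error)

best = genetic_fit()
```

    The recovered weights stay on the simplex by construction, so they can be read directly as fractional contributions.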

  14. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks.

    PubMed

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-07-29

    Interference alignment (IA) is a novel technique that can effectively eliminate interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high-SNR regimes; however, its complexity increases dramatically as the number of users and antennas increases, posing limits to its application in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm approximately points toward the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm suppresses the interference leakage more rapidly than the traditional AMIL algorithm, and achieves the same sum rate as the AMIL algorithm with far fewer iterations and less execution time.
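    The step-size idea can be illustrated without the IA machinery: when the objective along a search direction is a quartic in the step size s, its minimizer is found among the roots of the cubic derivative. A hedged pure-Python sketch, with all coefficients hypothetical:

```python
def quartic_min(c4, c3, c2, c1, c0, lo=-10.0, hi=10.0, n=2000):
    """Minimize q(s) = c4*s^4 + c3*s^3 + c2*s^2 + c1*s + c0 on [lo, hi].

    Stationary points satisfy the cubic q'(s) = 0; sign changes of q' are
    bracketed on a grid and refined by bisection, then the best candidate
    (including the interval endpoints) is returned.
    """
    q = lambda s: (((c4 * s + c3) * s + c2) * s + c1) * s + c0
    dq = lambda s: ((4 * c4 * s + 3 * c3) * s + 2 * c2) * s + c1
    candidates = [lo, hi]
    step = (hi - lo) / n
    a = lo
    for _ in range(n):
        b = a + step
        if dq(a) * dq(b) <= 0:               # a stationary point lies in [a, b]
            x, y = a, b
            for _ in range(60):              # bisection refinement
                m = 0.5 * (x + y)
                if dq(x) * dq(m) <= 0:
                    y = m
                else:
                    x = m
            candidates.append(0.5 * (x + y))
        a = b
    return min(candidates, key=q)

s_opt = quartic_min(1.0, 0.0, -2.0, 0.0, 1.0)   # q(s) = (s^2 - 1)^2
```

    A fully analytic version would solve the cubic in closed form (Cardano); the bracketing loop here is just the simplest reliable substitute.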

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

  16. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, single-reservoir, and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem. The global solution equals 1.213 for this same problem. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in fewer function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
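    A minimal sketch of the gravitational-search idea, using the standard GSA ingredients (fitness-derived masses, pairwise attractive forces, a decaying gravitational constant) on a simple sphere objective; all constants are illustrative, not those of the cited study.

```python
import math, random

rng = random.Random(1)

def gsa_minimize(f, dim=2, agents=20, iters=100, g0=100.0, bounds=(-5.0, 5.0)):
    """Gravitational search sketch: better agents are heavier, every agent
    is pulled toward the others, and the 'gravitational constant' decays."""
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(p) for p in pos]
        fb, fw = min(fit), max(fit)
        if fb < best_f:
            best_f, best = fb, list(pos[fit.index(fb)])
        m = [(fw - fi) / (fw - fb) if fw > fb else 1.0 for fi in fit]
        msum = sum(m) or 1.0
        mass = [mi / msum for mi in m]        # heaviest agent = best fitness
        g = g0 * math.exp(-20.0 * t / iters)  # decaying gravitational constant
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                r = math.dist(pos[i], pos[j]) + 1e-9
                for d in range(dim):
                    acc[d] += rng.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / r
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return best, best_f

best_x, best_val = gsa_minimize(lambda p: sum(v * v for v in p))  # sphere test
```

    Unlike a GA, there is no crossover or mutation to tune; the only free parameters are the initial gravitational constant, its decay, and the population size, which matches the "fewer parameters" advantage noted above.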

  17. Effects of activity and energy budget balancing algorithm on laboratory performance of a fish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.

    2012-01-01

    We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
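    The distinction between the two balancing algorithms can be made concrete with a toy energy budget: algorithm 2 back-calculates consumption on day t from the observed energy content on day t + 1. The budget equation and all numbers below are illustrative stand-ins, not the Wisconsin model itself.

```python
def consumption_algorithm2(energy, respiration, efficiency=0.7):
    """Back-calculate daily consumption so the energy budget balances
    against the *next* day's observed energy content (algorithm 2):
        E[t+1] = E[t] + efficiency * C[t] - respiration[t]
    """
    return [(energy[t + 1] - energy[t] + respiration[t]) / efficiency
            for t in range(len(energy) - 1)]

# Invented fish energy content (kJ) on consecutive days and daily
# respiration costs (kJ); the budget equation is a toy, not Wisconsin's.
energy = [1000.0, 1012.0, 1030.0, 1041.0]
respiration = [15.0, 16.0, 17.0]
cons = consumption_algorithm2(energy, respiration)

# Forward replay of the budget reproduces the observed trajectory exactly.
replay = [energy[0]]
for c, r in zip(cons, respiration):
    replay.append(replay[-1] + 0.7 * c - r)
```

    Using day t's energy density on both sides (algorithm 1) would leave the replayed trajectory lagging the observations whenever energy density changes, which is the bias the study reports.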

  18. Pulse shape optimization for electron-positron production in rotating fields

    NASA Astrophysics Data System (ADS)

    Fillion-Gourdeau, François; Hebenstreit, Florian; Gagnon, Denis; MacLean, Steve

    2017-07-01

    We optimize the pulse shape and polarization of time-dependent electric fields to maximize the production of electron-positron pairs via strong-field quantum electrodynamics processes. The pulse is parametrized in Fourier space by a B-spline polynomial basis, which results in a relatively low-dimensional parameter space while still allowing for a large number of electric field modes. The optimization is performed using a parallel implementation of differential evolution, one of the most efficient metaheuristic algorithms. The computational performance of the numerical method and the results on pair production are compared with a local multistart optimization algorithm. These techniques allow us to determine the pulse shape and field polarization that maximize the number of produced pairs in computationally accessible regimes.
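    A bare-bones rand/1/bin differential evolution loop of the kind referenced above, applied to a smooth toy stand-in for the pair-production yield; the objective, its peak location, and the DE settings are all invented for the example.

```python
import random

rng = random.Random(3)

def de_maximize(f, dim=4, pop=20, gens=80, F=0.6, CR=0.9, bounds=(-1.0, 1.0)):
    """Differential evolution, rand/1/bin variant, with greedy selection."""
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    fs = [f(x) for x in xs]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)          # force at least one mutated gene
            trial = [min(hi, max(lo, xs[a][d] + F * (xs[b][d] - xs[c][d])))
                     if (rng.random() < CR or d == jr) else xs[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft >= fs[i]:                  # keep the better of parent/trial
                xs[i], fs[i] = trial, ft
    k = max(range(pop), key=fs.__getitem__)
    return xs[k], fs[k]

# Toy stand-in for the pair-production yield: smooth, with a known maximum
# of 1.0 at an arbitrary point in coefficient space (all values invented).
peak = [0.5, -0.2, 0.1, 0.3]
yield_fn = lambda x: 1.0 - sum((xi - pi) ** 2 for xi, pi in zip(x, peak))
params, best = de_maximize(yield_fn)
```

    In the paper the expensive objective (a QED pair-production simulation) replaces `yield_fn`, and the per-generation trial evaluations are what get parallelized.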

  19. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; hide

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel-illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.

  20. Using Clustering to Establish Climate Regimes from PCM Output

    NASA Technical Reports Server (NTRS)

    Oglesby, Robert; Arnold, James E. (Technical Monitor); Hoffman, Forrest; Hargrove, W. W.; Erickson, D.

    2002-01-01

    A multivariate statistical clustering technique--based on the k-means algorithm of Hartigan--has been used to extract patterns of climatological significance from 200 years of general circulation model (GCM) output. Originally developed and implemented on a Beowulf-style parallel computer constructed by Hoffman and Hargrove from surplus commodity desktop PCs, the high-performance parallel clustering algorithm was previously applied to the derivation of ecoregions from map stacks of 9 and 25 geophysical conditions or variables for the conterminous U.S. at a resolution of 1 sq km. Now applied both across space and through time, the clustering technique yields temporally varying climate regimes predicted by transient runs of the Parallel Climate Model (PCM). Using a business-as-usual (BAU) scenario and clustering four fields of significance to the global water cycle (surface temperature, precipitation, soil moisture, and snow depth) from 1871 through 2098, the authors' analysis shows an increase in the spatial area occupied by the cluster or climate regime which typifies desert regions (i.e., an increase in desertification) and a decrease in the spatial area occupied by the climate regime typifying winter-time high-latitude permafrost regions. The patterns of cluster changes have been analyzed to understand the predicted variability in the water cycle on global and continental scales. In addition, representative climate regimes were determined by taking three 10-year averages of the fields 100 years apart for northern hemisphere winter (December, January, and February) and summer (June, July, and August). The result is global maps of typical seasonal climate regimes for 100 years in the past, for the present, and for 100 years into the future.
Using three-dimensional data or phase space representations of these climate regimes (i.e., the cluster centroids), the authors demonstrate the portion of this phase space occupied by the land surface at all points in space and time. Any single spot on the globe will exist in one of these climate regimes at any single point in time. By incrementing time, that same spot will trace out a trajectory or orbit between and among these climate regimes (or atmospheric states) in phase (or state) space. When a geographic region enters a state it never previously visited, a climatic change is said to have occurred. Tracing out the entire trajectory of a single spot on the globe yields a 'manifold' in state space representing the shape of its predicted climate occupancy. This sort of analysis enables a researcher to more easily grasp the multivariate behavior of the climate system.
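    The clustering step can be sketched with a plain Lloyd-style k-means on standardized (temperature, precipitation, soil moisture, snow depth) vectors. The cited work uses Hartigan's variant on a parallel machine, so this is only a minimal serial illustration with invented data.

```python
def kmeans(points, k, iters=20):
    """Lloyd-style k-means sketch (the cited work uses Hartigan's variant).
    Deterministic seeding: initial centroids spread across the input."""
    idx = [i * (len(points) - 1) // (k - 1) for i in range(k)] if k > 1 else [0]
    centroids = [list(points[i]) for i in idx]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels, centroids

# Invented, standardized (temperature, precipitation, soil moisture, snow
# depth) vectors for six grid cells in two obvious climate regimes.
desert = [[1.2, -1.0, -0.9, -1.0], [1.1, -0.8, -1.0, -1.0], [1.3, -0.9, -0.8, -1.0]]
tundra = [[-1.1, -0.2, 0.3, 1.2], [-1.2, -0.1, 0.4, 1.1], [-1.0, -0.3, 0.2, 1.3]]
labels, centroids = kmeans(desert + tundra, k=2)
```

    In the study's phase-space picture, the returned centroids are the climate regimes, and a grid cell's label sequence through time is its trajectory among them.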

  1. A multi-site reconstruction algorithm for bottom-up vulnerability assessment of water resource systems to changing streamflow conditions

    NASA Astrophysics Data System (ADS)

    Nazemi, A.; Zaerpour, M.

    2016-12-01

    The current paradigm for assessing the vulnerability of water resource systems to changing streamflow conditions often involves a cascade application of climate and hydrological models to project the future states of the streamflow regime entering a given water resource system. It is widely warned, however, that the overall uncertainty in this "top-down" modeling enterprise can be large due to the limitations in representing natural and anthropogenic processes that affect future streamflow variability and change. To address this, various types of stress tests are suggested to assess the vulnerability of water resource systems under a wide range of possible changes in streamflow conditions. The scope of such "bottom-up" assessments can go well beyond top-down projections and therefore provide a basis for monitoring different response modes under which water resource systems become vulnerable. Despite methodological differences, all bottom-up assessments are equipped with a systematic sampling procedure, with which different possibilities for future climate and/or streamflow conditions can be realized. Regardless of recent developments, currently available streamflow sampling algorithms are still limited, particularly in regional contexts, for which accurate representation of spatiotemporal dependencies in the streamflow regime is of major importance. In this presentation, we introduce a new development that enables handling temporal and spatial dependencies in regional streamflow regimes through a unified stochastic reconstruction algorithm. We demonstrate the application of this algorithm across various Canadian regions. By considering a real-world regional water resources system, we show how the new multi-site reconstruction algorithm can extend the practical utility of bottom-up vulnerability assessment and improve quantifying the associated risk in natural and anthropogenic water systems under unknown future conditions.

  2. Quantum plug n’ play: modular computation in the quantum regime

    NASA Astrophysics Data System (ADS)

    Thompson, Jayne; Modi, Kavan; Vedral, Vlatko; Gu, Mile

    2018-01-01

    Classical computation is modular. It exploits plug n’ play architectures which allow us to use pre-fabricated circuits without knowing their construction. This bestows advantages such as allowing parts of the computational process to be outsourced, and permitting individual circuit components to be exchanged and upgraded. Here, we introduce a formal framework to describe modularity in the quantum regime. We demonstrate a ‘no-go’ theorem, stipulating that it is not always possible to make use of quantum circuits without knowing their construction. This has significant consequences for quantum algorithms, forcing the circuit implementation of certain quantum algorithms to be rebuilt almost entirely from scratch after incremental changes in the problem—such as changing the number being factored in Shor’s algorithm. We develop a workaround capable of restoring modularity, and apply it to design a modular version of Shor’s algorithm that exhibits increased versatility and reduced complexity. In doing so we pave the way to a realistic framework whereby ‘quantum chips’ and remote servers can be invoked (or assembled) to implement various parts of a more complex quantum computation.

  3. The human body metabolism process mathematical simulation based on Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Oliynyk, Andriy; Oliynyk, Eugene; Pyptiuk, Olexandr; DzierŻak, RóŻa; Szatkowska, Małgorzata; Uvaysova, Svetlana; Kozbekova, Ainur

    2017-08-01

    The mathematical model of the metabolism process in the human organism based on the Lotka-Volterra model has been proposed, considering the healing regime, the nutrition system, and the features of the insulin and sugar fragmentation process in the organism. A numerical algorithm for the model using the fourth-order Runge-Kutta method has been implemented. Based on the results of the calculations, conclusions are drawn, recommendations on using the modeling results are given, and directions for further research are defined.
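    A minimal sketch of integrating a Lotka-Volterra system with the fourth-order Runge-Kutta method, as described above; the parameters are generic textbook values, not those of the metabolism model. The conserved quantity of the classical system gives a convenient accuracy check.

```python
import math

def lotka_volterra(x, y, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """Classical predator-prey right-hand side: (dx/dt, dy/dt)."""
    return alpha * x - beta * x * y, delta * x * y - gamma * y

def rk4_step(x, y, dt, f=lotka_volterra):
    """One fourth-order Runge-Kutta step for the 2-D system."""
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = f(x + dt * k3x, y + dt * k3y)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0)

def invariant(x, y, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """Quantity conserved along exact Lotka-Volterra orbits; its drift
    measures the integration error."""
    return delta * x - gamma * math.log(x) + beta * y - alpha * math.log(y)

x, y = 10.0, 5.0
v0 = invariant(x, y)
for _ in range(2000):        # integrate to t = 20 with dt = 0.01
    x, y = rk4_step(x, y, 0.01)
```

    With dt = 0.01 the invariant drifts only negligibly over the run, which is why RK4 is a common default for this kind of model.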

  4. Quantum algorithms for quantum field theories.

    PubMed

    Jordan, Stephen P; Lee, Keith S M; Preskill, John

    2012-06-01

    Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ⁴ theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm.

  5. On the accuracy of the LSC-IVR approach for excitation energy transfer in molecular aggregates

    NASA Astrophysics Data System (ADS)

    Teh, Hung-Hsuan; Cheng, Yuan-Chung

    2017-04-01

    We investigate the applicability of the linearized semiclassical initial value representation (LSC-IVR) method to excitation energy transfer (EET) problems in molecular aggregates by simulating the EET dynamics of a dimer model in a wide range of parameter regimes and comparing the results to those obtained from a numerically exact method. It is found that the LSC-IVR approach yields accurate population relaxation rates and decoherence rates in a broad parameter regime. However, the classical approximation imposed by the LSC-IVR method does not satisfy the detailed balance condition, generally leading to incorrect equilibrium populations. Based on this observation, we propose a post-processing algorithm to solve the long-time equilibrium problem and demonstrate that this long-time correction successfully removes the deviations from exact results for the LSC-IVR method in all of the regimes studied in this work. Finally, we apply the LSC-IVR method to simulate EET dynamics in the photosynthetic Fenna-Matthews-Olson complex, demonstrating that the LSC-IVR method with long-time correction provides an excellent description of coherent EET dynamics in this typical photosynthetic pigment-protein complex.

  6. Stratiform/convective rain delineation for TRMM microwave imager

    NASA Astrophysics Data System (ADS)

    Islam, Tanvir; Srivastava, Prashant K.; Dai, Qiang; Gupta, Manika; Wan Jaafar, Wan Zurina

    2015-10-01

    This article investigates the potential of machine learning algorithms to delineate stratiform/convective (S/C) rain regimes for a passive microwave imager, using calibrated brightness temperatures as the only spectral inputs. The algorithms have been implemented for the Tropical Rainfall Measuring Mission (TRMM) microwave imager (TMI), and calibrated as well as validated using the Precipitation Radar (PR) S/C information as the target class variable. Two different algorithm families are explored for the delineation. The first is the metaheuristic adaptive boosting (AdaBoost) algorithm, in its real, gentle, and modest versions. The second is classical linear discriminant analysis, in both Fisher's and penalized versions. Furthermore, prior to the development of the delineation algorithms, a feature selection analysis was conducted for a total of 85 features, comprising combinations of brightness temperatures from 10 GHz to 85 GHz and some derived indexes, such as the scattering index, polarization-corrected temperature, and polarization difference, with the help of the mutual-information-based minimal-redundancy-maximal-relevance (mRMR) criterion. It was found that the polarization-corrected temperature at 85 GHz and the features derived from the "addition" operator associated with the 85 GHz channels have good statistical dependency on the S/C target class variables. Further, it is shown how the mRMR feature selection technique helps to reduce the number of features without degrading the results of the machine learning algorithms. The proposed scheme is able to delineate the S/C rain regimes with reasonable accuracy. Based on the statistical validation over the validation period, the Matthews correlation coefficients are in the range of 0.60-0.70.
    Since the proposed method does not rely on any a priori information, it is well suited to other microwave sensors with channels similar to the TMI's. The method could also benefit the constellation sensors of the Global Precipitation Measurement (GPM) mission era.
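    The Matthews correlation coefficient used for validation above is computed directly from the binary confusion counts; a small sketch with invented S/C counts chosen to land in the reported 0.60-0.70 range:

```python
import math

def matthews_cc(tp, fp, fn, tn):
    """Matthews correlation coefficient for a binary S/C delineation."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Invented confusion counts: predicted convective vs. PR-observed convective
mcc = matthews_cc(tp=420, fp=130, fn=110, tn=840)
```

    Unlike plain accuracy, the MCC stays informative when the two classes are unbalanced, which is typically the case for stratiform versus convective pixels.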

  7. Electron-Phonon Systems on a Universal Quantum Computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macridin, Alexandru; Spentzouris, Panagiotis; Amundson, James

    We present an algorithm that extends existing quantum algorithms for simulating fermion systems in quantum chemistry and condensed matter physics to include phonons. The phonon degrees of freedom are represented with exponential accuracy on a truncated Hilbert space with a size that increases linearly with the cutoff of the maximum phonon number. The additional number of qubits required by the presence of phonons scales linearly with the size of the system. The additional circuit depth is constant for systems with finite-range electron-phonon and phonon-phonon interactions and linear for long-range electron-phonon interactions. Our algorithm for a Holstein polaron problem was implemented on an Atos Quantum Learning Machine (QLM) quantum simulator employing the Quantum Phase Estimation method. The energy and the phonon number distribution of the polaron state agree with exact diagonalization results for weak, intermediate and strong electron-phonon coupling regimes.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. The simulations recreate correct values of the electron distribution function at significantly reduced computational cost. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.

  9. Small-Noise Analysis and Symmetrization of Implicit Monte Carlo Samplers

    DOE PAGES

    Goodman, Jonathan; Lin, Kevin K.; Morzfeld, Matthias

    2015-07-06

    Implicit samplers are algorithms for producing independent, weighted samples from multivariate probability distributions. These are often applied in Bayesian data assimilation algorithms. We use Laplace asymptotic expansions to analyze two implicit samplers in the small-noise regime. Our analysis suggests a symmetrization of the algorithms that leads to improved implicit sampling schemes at a relatively small additional cost. Computational experiments confirm the theory and show that symmetrization is effective for small-noise sampling problems.
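A minimal one-dimensional implicit sampler (the plain linear-map variant, not the symmetrized scheme the paper proposes) illustrates the idea of weighted samples built from the Laplace (Gaussian) approximation of a target density proportional to exp(-F(x)):

```python
import math
import random

def implicit_sample_1d(F, x_min, hess, n=1000, seed=0):
    """Draw from the Gaussian (Laplace) approximation of exp(-F) at its
    minimizer x_min with curvature `hess`, and attach importance weights
    that correct for the non-Gaussian part of F."""
    rng = random.Random(seed)
    sigma = 1.0 / math.sqrt(hess)
    phi = F(x_min)
    samples, weights = [], []
    for _ in range(n):
        x = rng.gauss(x_min, sigma)
        # weight = exp(-(F(x) - quadratic model of F at x))
        quad = phi + 0.5 * hess * (x - x_min) ** 2
        samples.append(x)
        weights.append(math.exp(-(F(x) - quad)))
    return samples, weights
```

In the small-noise regime the quadratic model dominates, the weights approach a constant, and the effective sample size approaches n, which is the setting the Laplace analysis exploits.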

  10. Using HFire for spatial modeling of fire in shrublands

    Treesearch

    Seth H. Peterson; Marco E. Morais; Jean M. Carlson; Philip E. Dennison; Dar A. Roberts; Max A. Moritz; David R. Weise

    2009-01-01

    An efficient raster fire-spread model named HFire is introduced. HFire can simulate single-fire events or long-term fire regimes, using the same fire-spread algorithm. This paper describes the HFire algorithm, benchmarks the model using a standard set of tests developed for FARSITE, and compares historical and predicted fire spread perimeters for three southern...

  11. MODIS Retrievals of Cloud Optical Thickness and Particle Radius

    NASA Technical Reports Server (NTRS)

    Platnick, S.; King, M. D.; Ackerman, S. A.; Gray, M.; Moody, E.; Arnold, G. T.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) provides an unprecedented opportunity for global cloud studies with 36 spectral bands from the visible through the infrared, and spatial resolution from 250 m to 1 km at nadir. In particular, all solar window bands useful for simultaneous retrievals of cloud optical thickness and particle size (0.67, 0.86, 1.2, 1.6, 2.1, and 3.7 micron bands) are now available on a single satellite instrument/platform for the first time. An operational algorithm for the retrieval of these optical and cloud physical properties (including water path) has been developed for both liquid and ice phase clouds. The product is archived into two categories: pixel-level retrievals at 1 km spatial resolution (referred to as a Level-2 product) and global gridded statistics (Level-3 product). An overview of the MODIS cloud retrieval algorithm and early Level-2 and -3 results will be presented. A number of MODIS cloud validation activities are being planned, including the recent Southern Africa Regional Science Initiative 2000 (SAFARI-2000) dry season campaign conducted in August/September 2000. The latter part of the experiment concentrated on MODIS validation in the Namibian stratocumulus regime off the southwest coast of Africa. Early retrieval results from this regime will be discussed.

  12. Modified echo peak correction for radial acquisition regime (RADAR).

    PubMed

    Takizawa, Masahiro; Ito, Taeko; Itagaki, Hiroyuki; Takahashi, Tetsuhiko; Shimizu, Kanichirou; Harada, Junta

    2009-01-01

    Because radial sampling imposes many limitations on magnetic resonance (MR) imaging hardware, such as on the accuracy of the gradient magnetic field or the homogeneity of B(0), some correction of the echo signal is usually needed before image reconstruction. In our previous study, we developed an echo-peak-shift correction (EPSC) algorithm not easily affected by hardware performance. However, some artifacts remained in lung imaging, where tissue is almost absent, or in cardiac imaging, which is affected by blood flow. In this study, we modified the EPSC algorithm to improve the image quality of the radial acquisition regime (RADAR) and expand its application sequences. We assumed the artifacts were mainly caused by errors in the phase map for EPSC and used a phantom on a 1.5-tesla (T) MR scanner to investigate whether to modify the EPSC algorithm. To evaluate the effectiveness of EPSC, we compared results from T(1)- and T(2)-weighted images of a volunteer's lung region using the current and modified EPSC. We then applied the modified EPSC to RADAR spin echo (SE) and RADAR balanced steady-state acquisition with rewound gradient echo (BASG) sequence. The modified EPSC reduced phase discontinuity in the reference data used for EPSC and improved visualization of blood vessels in the lungs. Motion and blood flow caused no visible artifacts in the resulting images in either RADAR SE or RADAR BASG sequence. Use of the modified EPSC eliminated artifacts caused by signal loss in the reference data for EPSC. In addition, the modified EPSC was applied to RADAR SE and RADAR BASG sequences.

  13. From inverse problems to learning: a Statistical Mechanics approach

    NASA Astrophysics Data System (ADS)

    Baldassi, Carlo; Gerace, Federica; Saglietti, Luca; Zecchina, Riccardo

    2018-01-01

    We present a brief introduction to the statistical mechanics approaches for the study of inverse problems in data science. We then provide concrete new results on inferring couplings from sampled configurations in systems characterized by an extensive number of stable attractors in the low temperature regime. We also show how these results are connected to the problem of learning with realistic weak signals in computational neuroscience. Our techniques and algorithms rely on advanced mean-field methods developed in the context of disordered systems.

  14. Toward Real-Time Classification of Wake Regimes from Sensor Measurements

    NASA Astrophysics Data System (ADS)

    Wang, Mengying; Hemati, Maziar S.

    2017-11-01

    Hydrodynamic signals can transmit information that can be used by marine swimmers to detect disturbances in the local environment. Biological swimmers are able to sense and detect these signals with their hydrodynamic receptor systems. Recently, similar flow sensing systems have been developed with an aim to improve swimming efficiency in human-engineered underwater vehicles. A key part of the sensing strategy is to first classify wake structures in the external fluid, then to execute suitable control actions accordingly. In our previous work, we showed that a variety of 2S and 2P wakes can be distinguished based on time signatures of surface sensor measurements. However, we assumed access to the full dataset. In this talk, we extend our previous findings to classify wake regimes from sensor measurements in real-time, using a recursive Fast Fourier Transform algorithm. Wakes in different dynamical regimes, which may also vary in time, can be distinguished using our approach. Our results provide insights for enhancing hydrodynamic sensory capabilities in human-engineered systems.
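The recursive spectral update behind a streaming classifier of this kind can be sketched with the standard sliding DFT, which tracks one frequency bin in O(1) work per new sensor sample instead of recomputing a full FFT each step. This is a generic illustration, not the authors' algorithm:

```python
import cmath

def sliding_dft_bin(signal, window, k):
    """Track DFT bin k over a sliding window of length `window` using the
    recursive update X_k <- (X_k - oldest + newest) * exp(2*pi*i*k/N)."""
    N = window
    twiddle = cmath.exp(2j * cmath.pi * k / N)
    # Initialise with a direct DFT of the first window.
    X = sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
    yield X
    for t in range(N, len(signal)):
        X = (X - signal[t - N] + signal[t]) * twiddle
        yield X
```

A wake classifier could then compare the magnitudes of a few tracked bins (e.g. the shedding frequency and its harmonics) against per-regime signatures as each sample arrives.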

  15. Evaluation of the MV (CAPON) Coherent Doppler Lidar Velocity Estimator

    NASA Technical Reports Server (NTRS)

    Lottman, B.; Frehlich, R.

    1997-01-01

    The performance of the CAPON velocity estimator for coherent Doppler lidar is determined for typical space-based and ground-based parameter regimes. Optimal input parameters for the algorithm were determined for each regime. For weak signals, performance is described by the standard deviation of the good estimates and the fraction of outliers. For strong signals, the fraction of outliers is zero. Numerical effort was also determined.

  16. Optimization of artificial neural network models through genetic algorithms for surface ozone concentration forecasting.

    PubMed

    Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G

    2012-09-01

    This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O(3)) concentrations. GAs were applied to define the activation function in hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O(3) concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO(2)), and O(3) (recorded in the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were obtained, and only the best model of each methodology was analysed. In threshold models, the variables selected by GAs to define the O(3) regimes were temperature, CO and NO(2) concentrations, due to their importance in O(3) chemistry in an urban atmosphere. In the prediction of O(3) concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
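The two-regime threshold idea can be sketched as a grid search over candidate values of the threshold variable, fitting one model per regime and keeping the split with the lowest total squared error. This toy uses ordinary least squares lines rather than the GA-defined neural networks of the study:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x; returns (a, b, sse).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def fit_two_regime(threshold_var, xs, ys, candidates):
    """Grid search over candidate thresholds of an explanatory (threshold)
    variable; fit a separate linear model per regime and keep the split
    with the lowest total squared error."""
    best = None
    for c in candidates:
        lo = [i for i, v in enumerate(threshold_var) if v <= c]
        hi = [i for i, v in enumerate(threshold_var) if v > c]
        if len(lo) < 3 or len(hi) < 3:
            continue  # require enough points in each regime
        sse = (fit_line([xs[i] for i in lo], [ys[i] for i in lo])[2]
               + fit_line([xs[i] for i in hi], [ys[i] for i in hi])[2])
        if best is None or sse < best[1]:
            best = (c, sse)
    return best  # (threshold value, total SSE)
```

In the paper the threshold value and variable are themselves chosen by the GA; the exhaustive search above just makes the regime-switching structure concrete.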

  17. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.

  18. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE PAGES

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke; ...

    2017-06-29

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.

  19. Hypersonic research at Stanford University

    NASA Technical Reports Server (NTRS)

    Candler, Graham; Maccormack, Robert

    1988-01-01

    The status of the hypersonic research program at Stanford University is discussed and recent results are highlighted. The main areas of interest in the program are the numerical simulation of radiating, reacting and thermally excited flows, the investigation and numerical solution of hypersonic shock wave physics, the extension of the continuum fluid dynamic equations to the transition regime between continuum and free-molecule flow, and the development of novel numerical algorithms for efficient particulate simulations of flowfields.

  20. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    PubMed

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.
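A memetic algorithm is a genetic algorithm hybridized with local search applied to offspring. A toy sketch on a simple minimization problem (illustrative only; the paper applies the idea to NCSTAR parameter fitting, not to this test function):

```python
import random

def memetic_minimize(f, dim=2, pop_size=20, gens=40, seed=1):
    """Toy memetic algorithm: tournament selection, blend crossover,
    Gaussian mutation, plus a hill-climbing local search on each offspring."""
    rng = random.Random(seed)

    def local_search(x, step=0.1, iters=20):
        best, fb = x[:], f(x)
        for _ in range(iters):
            cand = [v + rng.gauss(0, step) for v in best]
            fc = f(cand)
            if fc < fb:
                best, fb = cand, fc
        return best

    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fits = [f(x) for x in pop]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            # Binary tournament selection for each parent.
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            p1 = pop[i] if fits[i] < fits[j] else pop[j]
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            p2 = pop[i] if fits[i] < fits[j] else pop[j]
            # Blend crossover plus small Gaussian mutation.
            alpha = rng.random()
            child = [alpha * a + (1 - alpha) * b + rng.gauss(0, 0.05)
                     for a, b in zip(p1, p2)]
            # The "memetic" step: refine the offspring by local search.
            new_pop.append(local_search(child))
        pop = new_pop
        fits = [f(x) for x in pop]
    best = min(range(pop_size), key=lambda i: fits[i])
    return pop[best], fits[best]
```

The local-search step is what distinguishes this from a plain GA: it plays the role the grid-plus-local-search initialization plays in the original NCSTAR procedure, but applied inside the evolutionary loop.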

  1. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. The simulations recreate correct values of the electron distribution function at significantly reduced computational cost. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
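The Pauli-blocking idea common to this family of algorithms can be sketched as a rejection step: a scattering event into a final state of energy E is accepted with probability 1 - f(E), so transitions into nearly full states are suppressed and the simulated occupation cannot exceed unity. This is a generic illustration, not the paper's full e-e scheme:

```python
import math
import random

def fermi_dirac(E, mu, kT):
    """Equilibrium occupation of a state at energy E."""
    return 1.0 / (1.0 + math.exp((E - mu) / kT))

def pauli_accept(E_final, mu, kT, rng):
    """Pauli-blocking rejection: accept a scattering event into a final
    state with probability 1 - f(E_final)."""
    return rng.random() < 1.0 - fermi_dirac(E_final, mu, kT)
```

States deep below the Fermi level (f near 1) almost never accept an incoming electron, while states far above it (f near 0) almost always do.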

  2. Automated contact angle estimation for three-dimensional X-ray microtomography data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Katherine A.; Moriarty, Dylan; Yoon, Hongkyu

    2015-11-10

    Multiphase flow in capillary regimes is a fundamental process in a number of geoscience applications. The ability to accurately define wetting characteristics of porous media can have a large impact on numerical models. In this paper, a newly developed automated three-dimensional contact angle algorithm is described and applied to high-resolution X-ray microtomography data from multiphase bead pack experiments with varying wettability characteristics. The algorithm calculates the contact angle by finding the angle between planes fit to each solid/fluid and fluid/fluid interface in the region surrounding each solid/fluid/fluid contact point. Results show that the algorithm is able to reliably compute contact angles using the experimental data. The in situ contact angles are typically larger than flat surface laboratory measurements using the same material. Furthermore, wetting characteristics in mixed-wet systems also change significantly after displacement cycles.
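The angle-between-fitted-planes step can be sketched for ideal planar interface loops, using Newell's method for the normals. This is a simplification: the actual algorithm fits planes to noisy microtomography voxel data around each contact point, rather than to clean polygons:

```python
import math

def newell_normal(points):
    """Unit normal of a (roughly) planar 3-D point loop via Newell's method."""
    nx = ny = nz = 0.0
    n = len(points)
    for i in range(n):
        x1, y1, z1 = points[i]
        x2, y2, z2 = points[(i + 1) % n]
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / norm, ny / norm, nz / norm)

def angle_between_planes(loop1, loop2):
    """Angle (degrees) between planes fitted to two interface point loops,
    e.g. a solid/fluid and a fluid/fluid interface near a contact point."""
    n1, n2 = newell_normal(loop1), newell_normal(loop2)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))
```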

  3. An efficient iteration strategy for the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Walters, R. W.; Dwoyer, D. L.

    1985-01-01

    A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two-dimensions is described. The basic algorithm has the property that convergence to the steady-state is quadratic for fully supersonic flows and linear otherwise. This is in contrast to the block ADI methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented here is easily enhanced to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, thus yielding a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing both oblique and normal shock waves which confirm the efficiency of the iteration strategy.

  4. Efficient solutions to the Euler equations for supersonic flow with embedded subsonic regions

    NASA Technical Reports Server (NTRS)

    Walters, Robert W.; Dwoyer, Douglas L.

    1987-01-01

    A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. Convergence of the basic algorithm to the steady state is quadratic for fully supersonic flows and is linear for other flows. This is in contrast to the block alternating direction implicit methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented herein is easily coupled with methods to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, and yields a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing oblique and normal shock waves which confirm the efficiency of the iteration strategy.

  5. Bubble suspension rheology and implications for conduit flow

    NASA Astrophysics Data System (ADS)

    Llewellin, E. W.; Manga, M.

    2005-05-01

    Bubbles are ubiquitous in magma during eruption and influence the rheology of the suspension. Despite this, bubble-suspension rheology is routinely ignored in conduit-flow and eruption models, potentially impairing accuracy and resulting in the loss of important phenomenological richness. The omission is due, in part, to a historical confusion in the literature concerning the effect of bubbles on the rheology of a liquid. This confusion has now been largely resolved and recently published studies have identified two viscous regimes: in regime 1, the viscosity of the two-phase (magma-gas) suspension increases as gas volume fraction ϕ increases; in regime 2, the viscosity of the suspension decreases as ϕ increases. The viscous regime for a deforming bubble suspension can be determined by calculating two dimensionless numbers, the capillary number Ca and the dynamic capillary number Cd. We provide a didactic explanation of how to include the effect of bubble-suspension rheology in continuum, conduit-flow models. Bubble-suspension rheology is reviewed and a practical rheological model is presented, followed by an algorithmic, step-by-step guide to including the rheological model in conduit-flow models. Preliminary results from conduit-flow models which have implemented the model presented are discussed and it is concluded that the effect of bubbles on magma rheology may be important in nature and results in a decrease of at least 800 m in calculated fragmentation-depth and an increase of between 40% and 250% in calculated eruption-rate compared with the assumption of Newtonian rheology.
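A minimal sketch of the regime logic: compute Ca and Cd from a bubble relaxation time and the flow's strain rate, then select a limiting relative-viscosity form. The Cd definition and the (1-ϕ)^(-1) and (1-ϕ)^(5/3) limits used here are assumptions taken from commonly quoted bubble-suspension results, not necessarily the exact rheological model presented in the paper:

```python
def capillary_numbers(relax_time, strain_rate, strain_accel):
    """Ca compares the bubble relaxation time to the shear rate; Cd compares
    it to the rate at which the flow itself changes (both dimensionless).
    The Cd form here is an assumption for illustration."""
    Ca = relax_time * strain_rate
    Cd = relax_time * strain_accel / strain_rate
    return Ca, Cd

def relative_viscosity(phi, Ca):
    # Limiting forms for steady flow (assumed here):
    # regime 1 (Ca << 1, near-spherical bubbles): viscosity rises with phi;
    # regime 2 (Ca >> 1, freely deforming bubbles): viscosity falls with phi.
    if Ca < 1.0:
        return (1.0 - phi) ** -1.0
    return (1.0 - phi) ** (5.0 / 3.0)
```

A conduit-flow model would evaluate Ca and Cd cell by cell and switch (or interpolate) between the two branches, rather than using the sharp threshold shown here.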

  6. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. 
The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, nevertheless, the present GKUAs for kinetic model Boltzmann equations in conjunction with current available high-performance parallel computer power can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.

  7. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2012-09-01

    the Gerchberg-Saxton algorithm in the Fresnel diffraction regime, and is much more robust against image noise than the TIE-based method. For details...developed efficient coding with the software modules for the image registration, flat-field correction, and phase retrievals. In addition, we...X, Liu H. 2010. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging

  8. Cross-over between discrete and continuous protein structure space: insights into automatic classification and networks of protein structures.

    PubMed

    Pascual-García, Alberto; Abia, David; Ortiz, Angel R; Bastolla, Ugo

    2009-03-01

    Structural classifications of proteins assume the existence of the fold, which is an intrinsic equivalence class of protein domains. Here, we test in which conditions such an equivalence class is compatible with objective similarity measures. We base our analysis on the transitive property of the equivalence relationship, requiring that similarity of A with B and B with C implies that A and C are also similar. Divergent gene evolution leads us to expect that the transitive property should approximately hold. However, if protein domains are a combination of recurrent short polypeptide fragments, as proposed by several authors, then similarity of partial fragments may violate the transitive property, favouring the continuous view of the protein structure space. We propose a measure to quantify the violations of the transitive property when a clustering algorithm joins elements into clusters, and we find that such violations present a well defined and detectable cross-over point, from an approximately transitive regime at high structure similarity to a regime with large transitivity violations and large differences in length at low similarity. We argue that protein structure space is discrete and hierarchic classification is justified up to this cross-over point, whereas at lower similarities the structure space is continuous and it should be represented as a network. We have tested the qualitative behaviour of this measure, varying all the choices involved in the automatic classification procedure, i.e., domain decomposition, alignment algorithm, similarity score, and clustering algorithm, and we have found that this behaviour is quite robust. The final classification depends on the chosen algorithms. We used the values of the clustering coefficient and the transitivity violations to select the optimal choices among those that we tested. Interestingly, this criterion also favours the agreement between automatic and expert classifications.
As a domain set, we have selected a consensus set of 2,890 domains decomposed very similarly in SCOP and CATH. As an alignment algorithm, we used a global version of MAMMOTH developed in our group, which is both rapid and accurate. As a similarity measure, we used the size-normalized contact overlap, and as a clustering algorithm, we used average linkage. The resulting automatic classification at the cross-over point was more consistent than expert ones with respect to the structure similarity measure, with 86% of the clusters corresponding to subsets of either SCOP or CATH superfamilies and fewer than 5% containing domains in distinct folds according to both SCOP and CATH. Almost 15% of SCOP superfamilies and 10% of CATH superfamilies were split, consistent with the notion of fold change in protein evolution. These results were qualitatively robust for all choices that we tested, although we did not try to use alignment algorithms developed by other groups. Folds defined in SCOP and CATH would be completely joined in the regime of large transitivity violations where clustering is more arbitrary. Consistently, the agreement between SCOP and CATH at fold level was lower than their agreement with the automatic classification obtained using as a clustering algorithm, respectively, average linkage (for SCOP) or single linkage (for CATH). The networks representing significant evolutionary and structural relationships between clusters beyond the cross-over point may allow us to perform evolutionary, structural, or functional analyses beyond the limits of classification schemes. These networks and the underlying clusters are available at http://ub.cbm.uam.es/research/ProtNet.php.
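Counting transitivity violations can be sketched as counting triples where A~B and B~C hold at a similarity threshold but A~C fails. This illustrates the idea only; the paper's measure is defined over the merge steps of the clustering algorithm rather than a fixed threshold graph:

```python
def transitivity_violations(sim, threshold):
    """Count unordered pairs (A, C) connected through some middle element B
    (sim[A][B] and sim[B][C] above threshold) where sim[A][C] is not.
    Returns (violations, triples_checked)."""
    n = len(sim)
    edges = [[sim[i][j] >= threshold for j in range(n)] for i in range(n)]
    violations = triples = 0
    for i in range(n):
        for j in range(n):
            if j == i or not edges[i][j]:
                continue
            for k in range(i + 1, n):
                if k == j or not edges[j][k]:
                    continue
                triples += 1
                if not edges[i][k]:
                    violations += 1
    return violations, triples
```

At high thresholds (high structural similarity) the violation fraction is small and clusters behave like equivalence classes; as the threshold is lowered past the cross-over point, violations accumulate and the graph is better treated as a network than as a partition.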

  9. Computer Algorithms for Measurement Control and Signal Processing of Transient Scattering Signatures

    DTIC Science & Technology

    1988-09-01

    CURVE * C Y2 IS THE BACKGROUND CURVE * C NSHIF IS THE NUMBER OF POINT TO SHIFT * C SET IS THE SUM OF THE POINT TO SHIFT * C IN ORDER TO ZERO PADDING...reduces the spectral content in both the low and high frequency regimes. If the threshold is set to zero, a "naive" deconvolution results. This provides...side of equation 5.2 was close to zero, so it can be neglected. As a result, the expected power is equal to the variance. The signal plus noise power

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalvit, Diego; Messina, Riccardo; Maia Neto, Paulo

    We develop the scattering approach for the dispersive force on a ground state atom on top of a corrugated surface. We present explicit results to first order in the corrugation amplitude. A variety of analytical results are derived in different limiting cases, including the van der Waals and Casimir-Polder regimes. We compute numerically the exact first-order dispersive potential for arbitrary separation distances and corrugation wavelengths, for a Rubidium atom on top of a silicon or gold corrugated surface. We consider in detail the correction to the proximity force approximation, and present a very simple approximation algorithm for computing the potential.
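The two limiting regimes of the atom-surface dispersion potential can be sketched as a piecewise power law: van der Waals scaling U ~ -C3/z^3 at short range, Casimir-Polder scaling U ~ -C4/z^4 once retardation matters. Treating the crossover as a sharp switch at a distance z_cross (set by the dominant atomic transition wavelength) is a simplification; the real potential interpolates smoothly, and the coefficients here are placeholders:

```python
def atom_surface_potential(z, C3, C4, z_cross):
    """Piecewise sketch of the flat-surface dispersion potential:
    non-retarded van der Waals regime below z_cross, retarded
    Casimir-Polder regime above it."""
    if z < z_cross:
        return -C3 / z ** 3   # van der Waals regime
    return -C4 / z ** 4       # Casimir-Polder (retarded) regime
```

The paper's scattering approach then supplies the first-order correction to this flat-surface baseline for a corrugated surface, where the proximity force approximation breaks down.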

  11. Quantitative phase retrieval with arbitrary pupil and illumination

    DOE PAGES

    Claus, Rene A.; Naulleau, Patrick P.; Neureuther, Andrew R.; ...

    2015-10-02

    We present a general algorithm for combining measurements taken under various illumination and imaging conditions to quantitatively extract the amplitude and phase of an object wave. The algorithm uses the weak object transfer function, which incorporates arbitrary pupil functions and partially coherent illumination. The approach is extended beyond the weak object regime using an iterative algorithm. Finally, we demonstrate the method on measurements of Extreme Ultraviolet Lithography (EUV) multilayer mask defects taken in an EUV zone plate microscope with both a standard zone plate lens and a zone plate implementing Zernike phase contrast.

  12. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baart, T. A.; Vandersypen, L. M. K.; Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft

    We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the double quantum dots into the single-electron regime. The algorithm only requires (1) prior knowledge of the gate design and (2) the pinch-off value of the single gate T that is shared by all the quantum dots. This work significantly alleviates the user effort required to tune multiple quantum dot devices.

  14. The First Year of Solar-Wind Data From the GENESIS Mission

    NASA Astrophysics Data System (ADS)

    Wiens, R. C.; Barraclough, B. L.; Steinberg, J. T.; Reisenfeld, D. B.; Neugebauer, M.; Burnett, D. S.

    2002-12-01

The GENESIS mission was launched in August 2001 and has been in an L1 halo orbit for over a year. The primary purpose of the mission is to collect solar-wind samples that will be returned to Earth in 2004 for high-precision isotopic and elemental analyses. GENESIS uses conventional ion and electron spectrometers to record solar-wind conditions during collection and to make real-time determinations of the solar-wind regimes, facilitating collection of separate samples of interstream (IS), coronal hole (CH), and coronal mass ejection (CME) flows. Of particular interest is the use of a bi-directional electron (BDE) index to determine the presence of CMEs. Although GENESIS lacks a magnetometer, the magnetic field vector (with sign ambiguity) is determined from the electron direction and matches other spacecraft magnetometer data well. GENESIS in-situ data and on-board regime determinations are available on the web. The data from fall 2001 were characterized by numerous CME regimes (comprising 32% of the time in the 4th quarter, based on the on-board algorithm), with little CH flow (only 2%). A strong CH flow was observed every solar rotation from mid-January through late May. June was quiet, with nearly all IS flow. The first and second quarters of 2002 were approximately 28% CME flow, with CH flow dropping from 18% to 6%. The discovery of unexpectedly noticeable BDE signals during CH flows at 1 AU (Steinberg et al., 2002) caused us early on to modify our regime selection algorithm to accommodate them. The on-board algorithm intentionally errs on the side of overestimating CME flows in order to keep the CH sample more pure. Comparisons have been made of various compositional parameters determined by GENESIS (Barraclough et al., this meeting) and by ACE SWICS (Reisenfeld et al., this meeting) for times corresponding to the GENESIS collection periods for each of the three regimes. 
The GENESIS L1 halo orbit is ~0.8 × 0.25 million km radius, somewhat larger than the ~0.3 × 0.2 and ~0.7 × 0.2 million km orbits of ACE and SOHO, respectively, presenting excellent opportunities for multi-spacecraft observations at L1.
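The on-board regime selection described above can be sketched as a simple rule hierarchy. This is a hypothetical reconstruction for illustration, not the mission's actual logic: the BDE test runs first (deliberately erring toward CME, as the abstract notes), and an assumed 500 km/s wind-speed threshold then separates CH from IS flow.

```python
# Hypothetical sketch of a solar-wind regime selector in the spirit of the
# GENESIS on-board algorithm. The rule ordering and the 500 km/s threshold
# are assumptions for illustration only.

def classify_regime(speed_km_s, bde_detected):
    if bde_detected:            # bi-directional electrons -> flag CME first,
        return "CME"            # erring toward CME to keep the CH sample pure
    if speed_km_s >= 500.0:     # assumed fast-wind threshold -> coronal hole
        return "CH"
    return "IS"                 # otherwise interstream flow

regimes = [classify_regime(420.0, True),
           classify_regime(650.0, False),
           classify_regime(380.0, False)]
```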

  15. A multi-dimensional, energy- and charge-conserving, nonlinearly implicit, electromagnetic Vlasov–Darwin particle-in-cell algorithm

    DOE PAGES

    Chen, G.; Chacón, L.

    2015-08-11

For decades, the Vlasov–Darwin model has been recognized to be attractive for particle-in-cell (PIC) kinetic plasma simulations in non-radiative electromagnetic regimes, to avoid radiative noise issues and gain computational efficiency. However, the Darwin model results in an elliptic set of field equations that renders conventional explicit time integration unconditionally unstable. We explore a fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions, which overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. The finite-difference scheme for Darwin field equations and particle equations of motion is space–time-centered, employing particle sub-cycling and orbit-averaging. This algorithm conserves total energy, local charge, canonical momentum in the ignorable direction, and preserves the Coulomb gauge exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. Finally, we demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D–3V.

  16. Multi-metric calibration of hydrological model to capture overall flow regimes

    NASA Astrophysics Data System (ADS)

    Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian

    2016-08-01

Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rate of change) play a critical role in water supply and flood control, environmental processes, and biodiversity and life-history patterns in aquatic ecosystems. Traditional flow-magnitude-oriented calibration of hydrological models is usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics representing the major characteristics of flow regimes were selected as optimization objectives. Model performance was compared with that of single-objective calibration. Results showed that most metrics were simulated better by the multi-objective approach than by single-objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rate of change. However, the performance for middle flow magnitude was not significantly improved because this metric is usually well captured by single-objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single-objective calibrations due to uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably, and the hydrological processes simulated by the multi-metric calibration became more reliable because more flow characteristics were considered. The study is expected to provide more detailed flow information from hydrological simulation for integrated water resources management, and to improve simulation of overall flow regimes.
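The equally weighted multi-objective score described above can be sketched as follows, assuming each flow-regime metric has been reduced to a relative absolute error. The three metric names and the simple error measure are illustrative assumptions, not the paper's exact sixteen metrics.

```python
# Sketch of an equally weighted multi-objective calibration score: each
# flow-regime metric contributes a relative absolute error, and the errors
# are averaged with equal weights. Metric names are illustrative only.

def metric_errors(observed, simulated):
    """Relative absolute error per flow-regime metric."""
    return {name: abs(simulated[name] - observed[name]) / (abs(observed[name]) + 1e-12)
            for name in observed}

def equally_weighted_objective(observed, simulated):
    """Mean of the per-metric errors: every metric counts equally."""
    errs = metric_errors(observed, simulated)
    return sum(errs.values()) / len(errs)

# Toy example with three stand-in metrics:
obs = {"low_flow_magnitude": 2.0, "high_flow_frequency": 5.0, "max_flow_timing": 120.0}
sim = {"low_flow_magnitude": 2.2, "high_flow_frequency": 4.0, "max_flow_timing": 130.0}
score = equally_weighted_objective(obs, sim)   # lower is better
```

Minimizing such a scalar score with any single-objective optimizer is one common way to implement an equally weighted multi-objective calibration.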

  17. Euler/Navier-Stokes calculations of transonic flow past fixed- and rotary-wing aircraft configurations

    NASA Technical Reports Server (NTRS)

    Deese, J. E.; Agarwal, R. K.

    1989-01-01

Computational fluid dynamics has an increasingly important role in the design and analysis of aircraft as computer hardware becomes faster and algorithms become more efficient. Progress is being made in two directions: more complex and realistic configurations are being treated, and algorithms based on higher approximations to the complete Navier-Stokes equations are being developed. The literature indicates that linear panel methods can model detailed, realistic aircraft geometries in flow regimes where this approximation is valid. As algorithms including higher approximations to the Navier-Stokes equations are developed, computer resource requirements increase rapidly. Generation of suitable grids becomes more difficult, and the number of grid points required to resolve flow features of interest increases. Recently, the development of large vector computers has enabled researchers to attempt more complex geometries with Euler and Navier-Stokes algorithms. The results of calculations for transonic flow about a typical transport and fighter wing-body configuration using the thin-layer Navier-Stokes equations are described, along with flow about helicopter rotor blades using both the Euler and Navier-Stokes equations.

  18. Optimizing interconnections to maximize the spectral radius of interdependent networks

    NASA Astrophysics Data System (ADS)

    Chen, Huashan; Zhao, Xiuyan; Liu, Feng; Xu, Shouhuai; Lu, Wenlian

    2017-03-01

    The spectral radius (i.e., the largest eigenvalue) of the adjacency matrices of complex networks is an important quantity that governs the behavior of many dynamic processes on the networks, such as synchronization and epidemics. Studies in the literature focused on bounding this quantity. In this paper, we investigate how to maximize the spectral radius of interdependent networks by optimally linking k internetwork connections (or interconnections for short). We derive formulas for the estimation of the spectral radius of interdependent networks and employ these results to develop a suite of algorithms that are applicable to different parameter regimes. In particular, a simple algorithm is to link the k nodes with the largest k eigenvector centralities in one network to the node in the other network with a certain property related to both networks. We demonstrate the applicability of our algorithms via extensive simulations. We discuss the physical implications of the results, including how the optimal interconnections can more effectively decrease the threshold of epidemic spreading in the susceptible-infected-susceptible model and the threshold of synchronization of coupled Kuramoto oscillators.
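The "simple algorithm" mentioned above relies on eigenvector centrality. Below is a minimal sketch of the centrality step only, assuming an undirected network (symmetric adjacency matrix); the choice of the target node in the other network depends on a property derived in the paper and is not modeled here.

```python
import numpy as np

# Minimal sketch: pick the k nodes with the largest eigenvector centralities
# in one network as endpoints for the k interconnections. Only the centrality
# ranking is shown; the paper's criterion for the other endpoint is omitted.

def top_k_by_eigenvector_centrality(adj, k):
    """Indices of the k nodes with the largest leading-eigenvector entries."""
    vals, vecs = np.linalg.eigh(adj)            # adjacency assumed symmetric
    lead = np.abs(vecs[:, np.argmax(vals)])     # leading (Perron) eigenvector
    return np.argsort(lead)[-k:][::-1]          # top-k, largest first

# Star graph on 4 nodes: the hub (node 0) has the largest centrality.
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1.0
chosen = top_k_by_eigenvector_centrality(A, 1)
```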

  19. Analysis of Effectiveness of Phoenix Entry Reaction Control System

    NASA Technical Reports Server (NTRS)

Dyakonov, Artem A.; Glass, Christopher E.; Desai, Prasun N.; VanNorman, John W.

    2008-01-01

Interaction between the external flowfield and the reaction control system (RCS) thruster plumes of the Phoenix capsule during entry has been investigated. The analysis covered the rarefied, transitional, hypersonic and supersonic flight regimes. Performance of the pitch, yaw and roll control authority channels was evaluated, with specific emphasis on the yaw channel due to its low nominal yaw control authority. Because Phoenix had already been constructed and its RCS could not be modified before flight, an assessment of RCS efficacy along the trajectory was needed to determine possible issues and to make necessary software changes. Effectiveness of the system in the various regimes was evaluated using a hybrid DSMC-CFD technique based on the DSMC Analysis Code (DAC) and the General Aerodynamic Simulation Program (GASP), the LAURA (Langley Aerothermal Upwind Relaxation Algorithm) code, and the FUN3D (Fully Unstructured 3D) code. Results of the analysis at hypersonic and supersonic conditions suggest a significant aero-RCS interference which reduced the efficacy of the thrusters and could likely produce control reversal. Very little aero-RCS interference was predicted in the rarefied and transitional regimes. A recommendation was made to the project to widen the controller system deadbands to minimize (if not eliminate) the use of RCS thrusters through the hypersonic and supersonic flight regimes, where their performance would be uncertain.

  20. Impulse position control algorithms for nonlinear systems

    NASA Astrophysics Data System (ADS)

    Sesekin, A. N.; Nepp, A. N.

    2015-11-01

The article is devoted to the formalization and description of impulse-sliding regimes in nonlinear dynamical systems that arise in the application of impulse position controls of a special kind. The trajectory concept of the impulse-sliding regime is formalized as the limit of a network of Euler polygons generated by a discrete approximation of the impulse position control. This paper differs from previously published work in that it uses a definition of solutions of systems with impulse controls based on the closure of the set of smooth solutions in the space of functions of bounded variation. The study of such regimes is motivated by the fact that they often arise when parrying disturbances acting on technical or economic control systems.

  1. The regime of biomass burning aerosols over the Mediterranean basin based on satellite observations

    NASA Astrophysics Data System (ADS)

    Kalaitzi, Nikoleta; Gkikas, Antonis; Papadimas, Christos. D.; Hatzianastassiou, Nikolaos; Torres, Omar; Mihalopoulos, Nikolaos

    2016-04-01

Biomass burning (BB) aerosol particles have significant effects on global and regional climate, as well as on regional air quality, visibility, cloud processes and human health. Biomass burning contributes about 40% of the global emission of black carbon (BC), and BB aerosols can exert a significant positive radiative forcing. BB aerosols can originate from natural fires and from human-induced burning, such as wood or agricultural waste burning. However, the magnitude, and even the sign, of the radiative forcing of BB aerosols is still uncertain, according to the fifth assessment report of the IPCC (2013). Moreover, there are significant differences between models in their representation (inventories) of BB aerosols, larger than for aerosols of other origins, e.g. of fossil fuel origin. Therefore, it is important to better understand the spatial and temporal regime of BB aerosols. This is attempted here for the broader Mediterranean basin, which is a very interesting study area for aerosols and one of the most climatically sensitive world regions. Determining the spatial and temporal regime of Mediterranean BB aerosols requires identifying these particles with complete spatial coverage over a long temporal record. Such complete coverage is only ensured by contemporary satellite observations, which offer the ability to characterize the presence of BB aerosols. This is possible thanks to the current availability of derived satellite products offering information on the size and absorption/scattering ability of aerosol particles. A synergistic use of such satellite aerosol data is made here, in conjunction with a purpose-built algorithm, in order to identify the presence of BB aerosols over the Mediterranean basin over the 11-year period from 2005 to 2015. The algorithm operates on a daily basis and at 1° × 1° latitude-longitude resolution, setting threshold values (criteria) for specific physical and optical properties that are representative of BB aerosols. 
More specifically, the algorithm examines the fulfillment of these criteria for the Ångström Exponent (AE), Fine Fraction (FF) and Aerosol Index (AI). The AE and FF data, which are characteristic of aerosol size, are derived from multispectral Collection 006 MODIS-Aqua Aerosol Optical Depth (AOD) data, whereas the AI data, which characterize the absorption ability of aerosols, are taken from the OMI-Aura database. The algorithm enables the identification of BB aerosols over specific geographical cells (pixels) throughout the study region, over both sea and land surfaces, during days of the 2005-2015 period. The results make possible the construction of a climatological database of Mediterranean BB aerosols, permitting the geographical patterns of their regime to be discerned, namely the areas in which they occur, in relation to their timing, i.e. the months and seasons of their occurrence. This regime is quantified: the frequency (absolute and percent) of occurrence of BB aerosols is calculated, along with the associated computed AOD values. The year-by-year variability of BB aerosols is also investigated over the period 2005-2015, with emphasis on inter-annual and seasonal tendencies.
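The daily, per-cell criterion test described above can be sketched as a simple conjunction of thresholds. The threshold values below are assumptions for illustration; the study sets its own criteria representative of BB aerosols.

```python
# Hedged sketch of a per-cell biomass-burning test on AE, FF and AI.
# The numeric thresholds are illustrative assumptions, not the study's values.

AE_MIN, FF_MIN, AI_MIN = 1.4, 0.6, 1.0   # assumed thresholds

def is_biomass_burning(ae, ff, ai):
    """Flag a 1-degree grid cell as BB only if all three criteria hold."""
    return ae >= AE_MIN and ff >= FF_MIN and ai >= AI_MIN

# One day's values for two grid cells:
flags = [is_biomass_burning(1.6, 0.8, 1.5),   # fine, absorbing aerosol -> BB
         is_biomass_burning(0.3, 0.2, 0.1)]   # coarse, scattering aerosol -> not BB
```

Counting such daily flags per cell over 2005-2015 yields the absolute and percent frequencies of occurrence mentioned in the abstract.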

  2. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  3. Control of retinal isomerization in bacteriorhodopsin in the high-intensity regime

    PubMed Central

    Florean, Andrei C.; Cardoza, David; White, James L.; Lanyi, J. K.; Sension, Roseanne J.; Bucksbaum, Philip H.

    2009-01-01

A learning algorithm was used to manipulate optical pulse shapes and optimize retinal isomerization in bacteriorhodopsin, for excitation levels up to 1.8 × 10¹⁶ photons per square centimeter. Below 1/3 the maximum excitation level, the yield was not sensitive to pulse shape. Above this level the learning algorithm found that a Fourier-transform-limited (TL) pulse maximized the 13-cis population. For this optimal pulse the yield increases linearly with intensity well beyond the saturation of the first excited state. To understand these results we performed systematic searches varying the chirp and energy of the pump pulses while monitoring the isomerization yield. The results are interpreted including the influence of one-photon and multiphoton transitions. The population dynamics in each intermediate conformation and the final branching ratio between the all-trans and 13-cis isomers are modified by changes in the pulse energy and duration. PMID:19564608

  4. Application of Output Predictive Algorithmic Control to a Terrain Following Aircraft System.

    DTIC Science & Technology

    1982-03-01

...non-linear regime the results from an optimal control solution may be questionable. ...strongly influenced by two other factors as well: the sample time T and the least-squares cost function Q. Unlike the deadbeat control law of Ref. ... design of aircraft control systems, since these methods offer tremendous insight into the dynamic behavior of the system at relatively low cost. However...

  5. A hybrid algorithm for coupling partial differential equation and compartment-based dynamics.

    PubMed

    Harrison, Jonathan U; Yates, Christian A

    2016-09-01

    Stochastic simulation methods can be applied successfully to model exact spatio-temporally resolved reaction-diffusion systems. However, in many cases, these methods can quickly become extremely computationally intensive with increasing particle numbers. An alternative description of many of these systems can be derived in the diffusive limit as a deterministic, continuum system of partial differential equations (PDEs). Although the numerical solution of such PDEs is, in general, much more efficient than the full stochastic simulation, the deterministic continuum description is generally not valid when copy numbers are low and stochastic effects dominate. Therefore, to take advantage of the benefits of both of these types of models, each of which may be appropriate in different parts of a spatial domain, we have developed an algorithm that can be used to couple these two types of model together. This hybrid coupling algorithm uses an overlap region between the two modelling regimes. By coupling fluxes at one end of the interface and using a concentration-matching condition at the other end, we ensure that mass is appropriately transferred between PDE- and compartment-based regimes. Our methodology gives notable reductions in simulation time in comparison with using a fully stochastic model, while maintaining the important stochastic features of the system and providing detail in appropriate areas of the domain. We test our hybrid methodology robustly by applying it to several biologically motivated problems including diffusion and morphogen gradient formation. Our analysis shows that the resulting error is small, unbiased and does not grow over time. © 2016 The Authors.
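The mass-transfer idea at the interface can be illustrated with a toy calculation, under strong simplifying assumptions: one PDE boundary cell of width h meeting one compartment of volume h, with a single Fickian flux moving mass between the two descriptions. The full algorithm also uses a concentration-matching condition at the far end of an overlap region; only conservation of the transferred mass is shown here.

```python
# Toy sketch of mass transfer between a PDE cell and an adjacent compartment.
# Geometry (1D cells of width h) and parameter values are assumptions.

h, dt, D = 0.1, 0.001, 1.0
c_pde = 8.0     # concentration in the boundary PDE cell
n_comp = 2.0    # copy number in the adjacent compartment (volume h)

def couple(c_pde, n_comp):
    c_comp = n_comp / h                   # compartment copy number -> concentration
    flux = D * (c_pde - c_comp) / h       # Fick's law across the interface
    dm = flux * dt                        # mass crossing in one time step
    return c_pde - dm / h, n_comp + dm    # the same mass leaves one side and enters the other

new_c, new_n = couple(c_pde, n_comp)
```

Because the same increment dm is subtracted from one description and added to the other, total mass c_pde*h + n_comp is conserved exactly, which is the property the coupling conditions are designed to guarantee.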

  6. A hybrid algorithm for coupling partial differential equation and compartment-based dynamics

    PubMed Central

    Yates, Christian A.

    2016-01-01

    Stochastic simulation methods can be applied successfully to model exact spatio-temporally resolved reaction–diffusion systems. However, in many cases, these methods can quickly become extremely computationally intensive with increasing particle numbers. An alternative description of many of these systems can be derived in the diffusive limit as a deterministic, continuum system of partial differential equations (PDEs). Although the numerical solution of such PDEs is, in general, much more efficient than the full stochastic simulation, the deterministic continuum description is generally not valid when copy numbers are low and stochastic effects dominate. Therefore, to take advantage of the benefits of both of these types of models, each of which may be appropriate in different parts of a spatial domain, we have developed an algorithm that can be used to couple these two types of model together. This hybrid coupling algorithm uses an overlap region between the two modelling regimes. By coupling fluxes at one end of the interface and using a concentration-matching condition at the other end, we ensure that mass is appropriately transferred between PDE- and compartment-based regimes. Our methodology gives notable reductions in simulation time in comparison with using a fully stochastic model, while maintaining the important stochastic features of the system and providing detail in appropriate areas of the domain. We test our hybrid methodology robustly by applying it to several biologically motivated problems including diffusion and morphogen gradient formation. Our analysis shows that the resulting error is small, unbiased and does not grow over time. PMID:27628171

  7. There Are (super)Giants in the Sky: Searching for Misidentified Massive Stars in Algorithmically-Selected Quasar Catalogs

    NASA Astrophysics Data System (ADS)

    Dorn-Wallenstein, Trevor Z.; Levesque, Emily

    2017-11-01

    Thanks to incredible advances in instrumentation, surveys like the Sloan Digital Sky Survey have been able to find and catalog billions of objects, ranging from local M dwarfs to distant quasars. Machine learning algorithms have greatly aided in the effort to classify these objects; however, there are regimes where these algorithms fail, where interesting oddities may be found. We present here an X-ray bright quasar misidentified as a red supergiant/X-ray binary, and a subsequent search of the SDSS quasar catalog for X-ray bright stars misidentified as quasars.

  8. Leading-Edge Flow Sensing for Aerodynamic Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Saini, Aditya

The identification of inflow air data quantities such as airspeed, angle of attack, and local lift coefficient on various sections of a wing or rotor blade provides the capability for load monitoring, aerodynamic diagnostics, and control on devices ranging from air vehicles to wind turbines. Real-time measurement of aerodynamic parameters during flight provides the ability to enhance aircraft operating capabilities while preventing dangerous stall situations. This thesis presents a novel Leading-Edge Flow Sensing (LEFS) algorithm for the determination of the air-data parameters using discrete surface pressures measured at a few ports in the vicinity of the leading edge of a wing or blade section. The approach approximates the leading-edge region of the airfoil as a parabola and uses the pressure distribution from the exact potential-flow solution for the parabola to fit the pressures measured from the ports. Pressures sensed at five discrete locations near the leading edge of an airfoil are given as input to the algorithm to solve the model using a simple nonlinear regression. The algorithm directly computes the inflow velocity, the stagnation-point location, section angle of attack and lift coefficient. The performance of the algorithm is assessed using computational and experimental data in the literature for airfoils under different flow conditions. The results show good correlation between the actual and predicted aerodynamic quantities within the pre-stall regime, even for a rotating blade section. Sensing the deviation of the aerodynamic behavior from the linear regime requires additional information on the location of flow separation on the airfoil surface. Bio-inspired artificial hair sensors were explored as a part of the current research for stall detection. The response of such artificial micro-structures can identify critical flow characteristics, which relate directly to the stall behavior. 
The response of the microfences was recorded via an optical microscope for flow over a flat plate at different freestream velocities in the NCSU subsonic wind tunnel. Experiments were also conducted to characterize the directional sensitivity of the microstructures by creating flow reversal at the sensor location to assess the sensor response. The results show that the direction of microfence deflection correctly reflects the local flow behavior as the flow direction is reversed at the sensor location, and the magnitude of deflection correlates qualitatively with an increase in the freestream velocity. The knowledge of the flow-separation location integrated with the LEFS algorithm allows the possibility of extending the LEFS analysis to post-stall flight regimes, which is explored in the current work. Finally, the application of the LEFS algorithm to unsteady aerodynamics is investigated to identify the critical sequence of events associated with the formation of leading-edge vortices. Signatures of vortex formation on the airfoil surface can be captured in the surface-pressure measurements. Real-time knowledge of the unsteady flow phenomena holds significant potential for exploiting the enhanced-lift characteristics related to vortex formation and inhibiting the detrimental effects of dynamic stall in engineering applications such as helicopters, wind turbines, bio-inspired flight, and energy-harvesting devices. Computational data were used to assess the capability of the LEFS outputs to identify the signatures associated with vortex formation, i.e. onset of vortex shedding, detachment, and termination. The results demonstrate useful correlation between the LEFS outputs and the LEV signatures.

  9. Navier-Stokes simulation with constraint forces: finite-difference method for particle-laden flows and complex geometries.

    PubMed

    Höfler, K; Schwarzer, S

    2000-06-01

    Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)] we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It also combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular work stations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation we compute the volume fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results both in the viscous and inertial regime and we find very satisfactory agreement.

  10. Non-commutative Chern numbers for generic aperiodic discrete systems

    NASA Astrophysics Data System (ADS)

    Bourne, Chris; Prodan, Emil

    2018-06-01

The search for strong topological phases in generic aperiodic materials and meta-materials is now vigorously pursued by the condensed matter physics community. In this work, we first introduce the concept of patterned resonators as a unifying theoretical framework for topological electronic, photonic, phononic, etc. (aperiodic) systems. We then discuss, in physical terms, the philosophy behind an operator theoretic analysis used to systematize such systems. A model calculation of the Hall conductance of a 2-dimensional amorphous lattice is given, where we present numerical evidence of its quantization in the mobility gap regime. Motivated by such facts, we then present the main result of our work, which is the extension of the Chern number formulas to Hamiltonians associated to lattices without a canonical labeling of the sites, together with index theorems that assure the quantization and stability of these Chern numbers in the mobility gap regime. Our results cover a broad range of applications, in particular, those involving quasi-crystalline, amorphous as well as synthetic (i.e. algorithmically generated) lattices.

  11. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    PubMed

    Veis, Libor; Pittner, Jiří

    2010-11-21

Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest-lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark since its two lowest-lying ¹A₁ states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peter, Justin R; May, Peter T; Potts, Rodney J

Statistics of radar retrievals of precipitation are presented. A K-means clustering algorithm applied to an historical record of radiosonde measurements identified three major synoptic regimes: a dry, stable regime with mainly westerly winds prevalent during winter; a moist south-easterly trade-wind regime; and a moist northerly regime, the latter two prevalent during summer. These are referred to as the westerly, trade-wind and northerly regimes, respectively. Cell statistics are calculated using an objective cell identification and tracking methodology on data obtained from a nearby S-band radar. Cell statistics are investigated for the entire radar observational period and also during sub-periods corresponding to the three major synoptic regimes. The statistics investigated are cell initiation location, area, rainrate, volume, height, height of the maximum reflectivity, volume greater than 40 dBZ, and storm speed and direction. Cells are found predominantly along the elevated topography. The cell statistics reveal that storms which form in the dry, stable westerly regime are of comparable size to the deep cells which form in the northerly regime, larger than those in the trade-wind regime, and, furthermore, have the largest rainrate. However, they occur less frequently and have shorter lifetimes than cells in the other regimes. Diurnal statistics of precipitation area and rainrate exhibit early-morning and mid-afternoon peaks, although the areal coverage lags the rainrate by several hours, indicative of a transition from convective to stratiform precipitation. The probability distributions of cell area, rainrate, volume, height and height of the maximum reflectivity are found to follow lognormal distributions.
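The regime identification above rests on standard K-means clustering. A minimal sketch with K = 3, using synthetic two-feature "soundings" (a wind component and a moisture proxy) as illustrative stand-ins for the radiosonde measurements; the features, cluster means, and fixed initialization are all assumptions.

```python
import numpy as np

# Plain-NumPy K-means sketch: assign each sounding to its nearest centroid,
# then move each centroid to the mean of its members, and repeat.

def kmeans(X, centers, iters=50):
    centers = centers.astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
        labels = d2.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):                 # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Three well-separated synthetic regimes (features: wind u-component, moisture):
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([10.0, 1.0], 0.3, size=(20, 2)),    # dry westerly
               rng.normal([-6.0, 8.0], 0.3, size=(20, 2)),    # moist trade
               rng.normal([2.0, 13.0], 0.3, size=(20, 2))])   # moist northerly
labels, centers = kmeans(X, X[[0, 20, 40]])   # one seed point per regime
```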

  13. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well: consecutive phase estimations to efficiently make products of asymmetric low-rank matrices classically accessible, and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an intermediate step, requiring far fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
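    The classical matrix pencil method that the quantum algorithm accelerates can be sketched in a few lines. The following is an illustrative noiseless implementation (the pencil parameter choice and the pseudoinverse handling here are simplifying assumptions, not the paper's construction): poles z_k of y[n] = Σ_k a_k z_k^n appear as the dominant eigenvalues of pinv(Y1) Y2 for a pair of shifted Hankel matrices.

```python
import numpy as np

def matrix_pencil(y, n_modes, L=None):
    """Estimate the poles z_k of y[n] = sum_k a_k * z_k**n (noiseless sketch)."""
    N = len(y)
    L = L or N // 2                                        # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
    Y1, Y2 = Y[:, :-1], Y[:, 1:]                           # shifted sub-matrices
    w = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    # keep the n_modes largest-magnitude eigenvalues as the signal poles
    return w[np.argsort(-np.abs(w))][:n_modes]

n = np.arange(40)
z_true = np.exp(-0.05 + 0.4j)       # damping 0.05, frequency 0.4 rad/sample
y = z_true ** n                     # one exponentially damped complex sinusoid
z_est = matrix_pencil(y, 1)[0]
damping, freq = -np.log(np.abs(z_est)), np.angle(z_est)
```

    For noiseless data the recovered damping factor and frequency match the generating pole to machine precision; noisy data would call for an SVD-truncated pencil.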

  14. Application of stochastic weighted algorithms to a multidimensional silica particle model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs' solutions converge to the direct simulation algorithm's (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High-precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high-dimensional particle models to simulate real-world systems.

  15. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
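    The ill-conditioning of RBF-Direct that motivates this work is easy to demonstrate numerically. A small sketch (Gaussian kernel, 1-D nodes; node count and shape-parameter values chosen purely for illustration) shows the condition number of the interpolation matrix exploding as the kernels flatten (shape parameter ε → 0):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 15)              # interpolation nodes
r = np.abs(x[:, None] - x[None, :])        # pairwise node distances

def cond_gaussian(eps):
    """Condition number of the Gaussian RBF system matrix A_ij = exp(-(eps*r_ij)^2)."""
    A = np.exp(-(eps * r) ** 2)
    return np.linalg.cond(A)

# flatter kernels (smaller eps) -> catastrophically worse conditioning
conds = {eps: cond_gaussian(eps) for eps in (10.0, 3.0, 1.0, 0.3)}
```

    RBF-RA, like RBF-QR and RBF-GA, exists precisely to evaluate the interpolant accurately in this small-ε regime without ever solving this ill-conditioned linear system directly.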

  16. Method of Determining the Aerodynamic Characteristics of a Flying Vehicle from the Surface Pressure

    NASA Astrophysics Data System (ADS)

    Volkov, V. F.; Dyad'kin, A. A.; Zapryagaev, V. I.; Kiselev, N. P.

    2017-11-01

    The paper describes the procedure used for determining the aerodynamic characteristics (forces and moments acting on a model of a flying vehicle) from the results of pressure measurements on the surface of a model of a re-entry vehicle with operating retrofire brake rockets in the regime of hovering over a landing surface. The algorithm for constructing the interpolation polynomial over interpolation nodes in the radial and azimuthal directions, using the assumption of symmetry of the pressure distribution over the surface, is presented. The aerodynamic forces and moments at different tilts of the vehicle are obtained. It is shown that the aerodynamic force components acting on the vehicle in the landing regime, caused by the action of the vertical velocity deceleration nozzle jets, are negligibly small in comparison with the engine thrust.

  17. Impulse position control algorithms for nonlinear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesekin, A. N., E-mail: sesekin@list.ru; Institute of Mathematics and Mechanics, Ural Division of Russian Academy of Sciences, 16 S. Kovalevskaya, Ekaterinburg, 620990; Nepp, A. N., E-mail: anepp@urfu.ru

    2015-11-30

    The article is devoted to the formalization and description of the impulse-sliding regime in nonlinear dynamical systems that arises in the application of impulse position controls of a special kind. The trajectory of the impulse-sliding regime is formalized as the limit of a network of Euler polygons generated by a discrete approximation of the impulse position control. This paper differs from previously published papers in that it uses a definition of solutions of systems with impulse controls based on the closure of the set of smooth solutions in the space of functions of bounded variation. The need to study such regimes stems from the fact that they often arise when parrying disturbances acting on technical or economic control systems.

  18. Towards robust algorithms for current deposition and dynamic load-balancing in a GPU particle in cell code

    NASA Astrophysics Data System (ADS)

    Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio

    2012-12-01

    We present 'jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge-preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrarily sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and any particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.

  19. Low-loss adiabatically-tapered high-contrast gratings for slow-wave modulators on SOI

    NASA Astrophysics Data System (ADS)

    Sciancalepore, Corrado; Hassan, Karim; Ferrotti, Thomas; Harduin, Julie; Duprez, Hélène; Menezo, Sylvie; Ben Bakir, Badhise

    2015-02-01

    In this communication, we report on the design, fabrication, and testing of silicon-based photonic integrated circuits (Si-PICs) including low-loss, flat-band, slow-light high-contrast-grating (HCG) waveguides at 1.31 μm. The light slowdown is achieved in 300-nm-thick silicon-on-insulator (SOI) rib waveguides by patterning adiabatically-tapered high-contrast gratings, capable of providing slow-light propagation with extremely low optical losses, back-scattering, and Fabry-Pérot noise. In detail, the one-dimensional (1-D) grating architecture provides band-edge group indices ng ~ 25, characterized by overall propagation losses equivalent to those of the index-like propagation regime (~ 1-2 dB/cm). Such a photonic band-edge slow-light regime at low propagation losses is made possible by the adiabatic apodization of the 1-D HCGs, so that the light slow-down regime is reached without an additional optical-loss penalty. In addition, a tailored apodization optimized via genetic algorithms flattens the slow-light regime over the wavelength window of interest, thus meeting the group-index stability requirements of modulation and non-linear effects generation. In conclusion, such architectures provide key features suitable for power-efficient high-speed modulators in silicon as well as an extremely low-loss building block for non-linear optics (NLO), now available in the Si photonics toolbox.

  20. Calculation of hypersonic shock structure using flux-split algorithms

    NASA Technical Reports Server (NTRS)

    Eppard, W. M.; Grossman, B.

    1991-01-01

    There exists an altitude regime in the atmosphere that is within the continuum domain but wherein the conventional Navier-Stokes equations cease to be accurate. The altitude limits for this so-called continuum transition regime depend on vehicle size and speed. Within this regime, the thickness of the bow shock wave is no longer negligible compared to the shock stand-off distance, and the peak radiation intensity occurs within the shock wave structure itself. For this reason it is no longer valid to treat the shock wave as a discontinuous jump, and it becomes necessary to compute through the shock wave itself. To accurately calculate hypersonic flowfields, the governing equations must be capable of yielding realistic profiles of flow variables throughout the structure of a hypersonic shock wave. The conventional form of the Navier-Stokes equations is restricted to flows with only small departures from translational equilibrium; for this reason it does not provide the capability to accurately predict hypersonic shock structure. Calculations in the continuum transition regime therefore require governing equations other than Navier-Stokes. Several alternatives to Navier-Stokes are discussed, first for a monatomic gas and then for a diatomic gas, where rotational energy must be included. Results are presented for normal shock calculations with argon and nitrogen.

  1. CaSPIAN: A Causal Compressive Sensing Algorithm for Discovering Directed Interactions in Gene Networks

    PubMed Central

    Emad, Amin; Milenkovic, Olgica

    2014-01-01

    We introduce a novel algorithm for inference of causal gene interactions, termed CaSPIAN (Causal Subspace Pursuit for Inference and Analysis of Networks), which is based on coupling compressive sensing and Granger causality techniques. The core of the approach is to discover sparse linear dependencies between shifted time series of gene expressions using a sequential list-version of the subspace pursuit reconstruction algorithm and to estimate the direction of gene interactions via Granger-type elimination. The method is conceptually simple and computationally efficient, and it allows for dealing with noisy measurements. Its performance as a stand-alone platform without biological side-information was tested on simulated networks, on the synthetic IRMA network in Saccharomyces cerevisiae, and on data pertaining to the human HeLa cell network and the SOS network in E. coli. The results produced by CaSPIAN are compared to the results of several related algorithms, demonstrating significant improvements in inference accuracy of documented interactions. These findings highlight the importance of Granger causality techniques for reducing the number of false-positives, as well as the influence of noise and sampling period on the accuracy of the estimates. In addition, the performance of the method was tested in conjunction with biological side information of the form of sparse “scaffold networks”, to which new edges were added using available RNA-seq or microarray data. These biological priors aid in increasing the sensitivity and precision of the algorithm in the small sample regime. PMID:24622336
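    Granger-type elimination rests on a simple idea: x is a candidate cause of y if lagged values of x improve the one-step prediction of y beyond what y's own lags achieve. A minimal illustration with ordinary least squares (a toy stand-in for CaSPIAN's subspace-pursuit machinery, with invented data) might look like:

```python
import numpy as np

def granger_gain(x, y, p=2):
    """Fractional reduction in one-step prediction error of y when p lags of x are added."""
    T = len(y)
    Y = y[p:]
    own  = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    both = np.column_stack([own] + [x[p - k:T - k][:, None] for k in range(1, p + 1)])
    r_own  = Y - own  @ np.linalg.lstsq(own,  Y, rcond=None)[0]
    r_both = Y - both @ np.linalg.lstsq(both, Y, rcond=None)[0]
    return 1.0 - (r_both @ r_both) / (r_own @ r_own)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):                 # x drives y with a one-step delay
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

gain_xy = granger_gain(x, y)            # large: x Granger-causes y
gain_yx = granger_gain(y, x)            # near zero: y does not help predict x
```

    Thresholding such gains (with proper statistical testing) gives the directed edges; CaSPIAN's contribution is making this tractable and sparse in the small-sample, many-gene regime.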

  2. An efficient algorithm for the retarded time equation for noise from rotating sources

    NASA Astrophysics Data System (ADS)

    Loiodice, S.; Drikakis, D.; Kokkalis, A.

    2018-01-01

    This study concerns modelling of noise emanating from rotating sources such as helicopter rotors. We present an accurate and efficient algorithm for the solution of the retarded time equation, which can be used both in subsonic and supersonic flow regimes. A novel approach for the search of the roots of the retarded time function was developed based on considerations of the kinematics of rotating sources and of the bifurcation analysis of the retarded time function. It is shown that the proposed algorithm is faster than the classical Newton and Brent methods, especially in the presence of sources rotating supersonically.
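    The classical Newton approach against which the proposed algorithm is compared can be sketched directly. For a subsonically rotating point source, the retarded time function g(τ) = c(t − τ) − |x_obs − x_s(τ)| is monotone and has a single root, so Newton iteration converges reliably (geometry and constants below are invented for illustration; the paper's bifurcation-aware root search for supersonic sources is not reproduced here):

```python
import numpy as np

c = 340.0                  # speed of sound [m/s]
a, omega = 1.0, 50.0       # rotor radius [m], angular rate [rad/s]; a*omega << c (subsonic)
x_obs = np.array([10.0, 0.0])

def x_src(tau):
    return a * np.array([np.cos(omega * tau), np.sin(omega * tau)])

def v_src(tau):
    return a * omega * np.array([-np.sin(omega * tau), np.cos(omega * tau)])

def retarded_time(t, tol=1e-12, max_iter=50):
    """Newton iteration on g(tau) = c*(t - tau) - |x_obs - x_src(tau)| = 0."""
    tau = t - np.linalg.norm(x_obs) / c        # initial guess: emission from the origin
    for _ in range(max_iter):
        d = x_obs - x_src(tau)
        r = np.linalg.norm(d)
        g = c * (t - tau) - r
        dg = -c + d @ v_src(tau) / r           # g'(tau); strictly negative when subsonic
        step = g / dg
        tau -= step
        if abs(step) < tol:
            break
    return tau

t = 0.1
tau = retarded_time(t)
```

    For supersonic tip speeds g(τ) is no longer monotone and multiple emission times exist, which is exactly the regime where the paper's kinematics-informed root search pays off.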

  3. Airway and tissue loading in postinterrupter response of the respiratory system - an identification algorithm construction.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper offers an enhancement of the classical interrupter technique algorithm dedicated to respiratory mechanics measurements. Idea consists in exploitation of information contained in postocclusional transient states during indirect measurement of parameter characteristics by model identification. It needs the adequacy of an inverse analogue to general behavior of the real system and a reliable algorithm of parameter estimation. The second one was a subject of reported works, which finally showed the potential of the approach to separation of airway and tissue response in a case of short-term excitation by interrupter valve operation. Investigations were conducted in a regime of forward-inverse computer experiment.

  4. CREKID: A computer code for transient, gas-phase combustion kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1984-01-01

    A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release, and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially-fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.
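    The exponentially-fitted variant itself is not reproduced here, but the predictor-corrector structure the abstract describes can be illustrated with a plain implicit trapezoidal rule and a Newton corrector on a stiff scalar test equation (step sizes, tolerances, and the test problem below are illustrative assumptions):

```python
import numpy as np

def trap_newton(f, dfdy, y0, t0, t1, h):
    """Implicit trapezoidal rule with an explicit-Euler predictor and Newton corrector
    for a scalar ODE y' = f(t, y)."""
    t, y = t0, y0
    while t < t1 - 1e-12:
        h_step = min(h, t1 - t)
        yn = y + h_step * f(t, y)                  # predictor: explicit Euler
        for _ in range(20):                        # corrector: Newton on the trapezoidal residual
            F = yn - y - 0.5 * h_step * (f(t, y) + f(t + h_step, yn))
            dF = 1.0 - 0.5 * h_step * dfdy(t + h_step, yn)
            dy = F / dF
            yn -= dy
            if abs(dy) < 1e-14:
                break
        t, y = t + h_step, yn
    return y

# stiff linear test problem y' = -k*y, exact solution y0 * exp(-k*t)
k = 1.0e3
y_end = trap_newton(lambda t, y: -k * y, lambda t, y: -k, 1.0, 0.0, 0.01, 1e-4)
```

    The Newton corrector is what keeps the scheme stable through the stiff heat-release regime; the Newton-Jacobi option mentioned in the abstract trades the Jacobian solve for cheaper iteration when the problem is only mildly stiff.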

  5. Properties of bosons in a one-dimensional bichromatic optical lattice in the regime of the pinning transition: A worm-algorithm Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Sakhel, Asaad R.

    2016-09-01

    The sensitivity of the pinning transition (PT) as described by the sine-Gordon model of strongly interacting bosons confined in a shallow, one-dimensional, periodic optical lattice (OL), is examined against perturbations of the OL. The PT has been recently realized experimentally by Haller et al. [Nature (London) 466, 597 (2010), 10.1038/nature09259] and is the exact opposite of the superfluid-to-Mott-insulator transition in a deep OL with weakly interacting bosons. The continuous-space worm-algorithm (WA) Monte Carlo method [Boninsegni et al., Phys. Rev. E 74, 036701 (2006), 10.1103/PhysRevE.74.036701] is applied for the present examination. It is found that the WA is able to reproduce the PT, which is another manifestation of the power of continuous-space WA methods in capturing the physics of phase transitions. In order to examine the sensitivity of the PT, it is tweaked by the addition of the secondary OL. The resulting bichromatic optical lattice (BCOL) is considered with a rational ratio of the constituting wavelengths λ1 and λ2 in contrast to the commonly used irrational ratio. For a weak BCOL, it is chiefly demonstrated that this PT is robust against the introduction of a weaker, secondary OL. The system is explored numerically by scanning its properties in a range of the Lieb-Liniger interaction parameter γ in the regime of the PT. It is argued that there should not be much difference in the results between those due to an irrational ratio λ1/λ2 and those due to a rational approximation of the latter, bringing this in line with a recent statement by Boers et al. [Phys. Rev. A 75, 063404 (2007), 10.1103/PhysRevA.75.063404]. The correlation function, Matsubara Green's function (MGF), and the single-particle density matrix do not respond to changes in the depth of the secondary OL V1. For a stronger BCOL, however, a response is observed because of changes in V1. 
In the regime where the bosons are fermionized, the MGF reveals that hole excitations are favored over particle excitations, indicating that holes in the PT regime play an important role in the response of the system's properties to changes in γ.

  6. Detecting dust hits at Enceladus, Saturn and beyond using CAPS / ELS data from Cassini

    NASA Astrophysics Data System (ADS)

    Vandegriff, J. D.; Stoneberger, P. J.; Jones, G.; Waite, J. H., Jr.

    2016-12-01

    It has recently been shown (1) that the impact of hypervelocity dust grains on the Cassini spacecraft can be detected by the Cassini Plasma Spectrometer (CAPS) Electron Spectrometer (ELS) instrument. For multiple Enceladus flybys, fine scale features in the lower energy regime of ELS energy spectra can be explained as short-duration, isotropic plasma clouds due to dust impacts. We have developed an algorithm for detecting these hypervelocity dust impacts, and the list of such impacts during Enceladus flybys will be presented. We also present preliminary results obtained when using the algorithm to search for dust impacts in other regions of Saturn's magnetosphere as well as in the solar wind. (1) Jones, Geraint, Hypervelocity dust impact signatures detected by Cassini CAPS-ELS in the Enceladus plume, MOP Meeting, June 1-5, 2015, Atlanta, GA

  7. Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara

    2014-01-01

    We report progress in the development of a physics-based model for cryogenic chilldown and loading. The chilldown and loading are modeled as a fully separated, non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution follows closely the nearly-implicit and semi-implicit algorithms developed at Idaho National Laboratory for autonomous control of thermal-hydraulic systems. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.

  8. Statistical Mechanics of Combinatorial Auctions

    NASA Astrophysics Data System (ADS)

    Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo

    2006-09-01

    Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.

  9. Characterizing Arctic Sea Ice Topography Using High-Resolution IceBridge Data

    NASA Technical Reports Server (NTRS)

    Petty, Alek; Tsamados, Michel; Kurtz, Nathan; Farrell, Sinead; Newman, Thomas; Harbeck, Jeremy; Feltham, Daniel; Richter-Menge, Jackie

    2016-01-01

    We present an analysis of Arctic sea ice topography using high resolution, three-dimensional, surface elevation data from the Airborne Topographic Mapper, flown as part of NASA's Operation IceBridge mission. Surface features in the sea ice cover are detected using a newly developed surface feature picking algorithm. We derive information regarding the height, volume and geometry of surface features from 2009-2014 within the Beaufort/Chukchi and Central Arctic regions. The results are delineated by ice type to estimate the topographic variability across first-year and multi-year ice regimes.

  10. Optically-derived estimates of phytoplankton size class and taxonomic group biomass in the Eastern Subarctic Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Zeng, Chen; Rosengard, Sarah Z.; Burt, William; Peña, M. Angelica; Nemcek, Nina; Zeng, Tao; Arrigo, Kevin R.; Tortell, Philippe D.

    2018-06-01

    We evaluate several algorithms for the estimation of phytoplankton size class (PSC) and functional type (PFT) biomass from ship-based optical measurements in the Subarctic Northeast Pacific Ocean. Using underway measurements of particulate absorption and backscatter in surface waters, we derived estimates of PSC/PFT based on chlorophyll-a concentrations ([Chl-a]), particulate absorption spectra, and the wavelength dependence of particulate backscatter. Optically-derived [Chl-a] and phytoplankton absorption measurements were validated against discrete calibration samples, while the derived PSC/PFT estimates were validated using size-fractionated Chl-a measurements and HPLC analysis of diagnostic photosynthetic pigments (DPA). Our results show that PSC/PFT algorithms based on [Chl-a] and particulate absorption spectra performed significantly better than the backscatter slope approach. These two more successful algorithms yielded estimates of phytoplankton size classes that agreed well with HPLC-derived DPA estimates (RMSE = 12.9% and 16.6%, respectively) across a range of hydrographic and productivity regimes. Moreover, the [Chl-a] algorithm produced PSC estimates that agreed well with size-fractionated [Chl-a] measurements, and estimates of the biomass of specific phytoplankton groups that were consistent with values derived from HPLC. Based on these results, we suggest that simple [Chl-a] measurements should be more fully exploited to improve the classification of phytoplankton assemblages in the Northeast Pacific Ocean.
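    Abundance-based PSC algorithms of the kind evaluated here typically partition total [Chl-a] among size classes with saturating exponential functions (in the style of three-component models such as Brewin et al.). A sketch with illustrative, non-published coefficients (all four constants below are invented for demonstration, not the paper's fitted values):

```python
import numpy as np

# Illustrative coefficients (hypothetical, chosen only so the fractions behave sensibly)
C_PN_MAX, S_PN = 1.06, 0.85   # asymptote and initial slope for the pico+nano contribution
C_P_MAX,  S_P  = 0.11, 6.0    # asymptote and initial slope for the pico contribution

def size_classes(chl):
    """Partition total [Chl-a] (mg m^-3) into pico/nano/micro contributions."""
    chl = np.asarray(chl, dtype=float)
    c_pn = C_PN_MAX * (1.0 - np.exp(-S_PN * chl))   # combined pico + nano
    c_p  = C_P_MAX  * (1.0 - np.exp(-S_P  * chl))   # pico only
    return c_p, c_pn - c_p, chl - c_pn              # pico, nano, micro

pico, nano, micro = size_classes([0.05, 0.3, 2.0])
```

    The key structural property is that small cells saturate at low [Chl-a], so any biomass added beyond the asymptotes is attributed to microplankton, which matches the observation that high-chlorophyll waters are dominated by larger cells.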

  11. Systemic risk and spatiotemporal dynamics of the US housing market.

    PubMed

    Meng, Hao; Xie, Wen-Jie; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H Eugene

    2014-01-13

    Housing markets play a crucial role in economies and the collapse of a real-estate bubble usually destabilizes the financial system and causes economic recessions. We investigate the systemic risk and spatiotemporal dynamics of the US housing market (1975-2011) at the state level based on the Random Matrix Theory (RMT). We identify richer economic information in the largest eigenvalues deviating from RMT predictions for the housing market than for stock markets and find that the component signs of the eigenvectors contain either geographical information or the extent of differences in house price growth rates or both. By looking at the evolution of different quantities such as eigenvalues and eigenvectors, we find that the US housing market experienced six different regimes, which is consistent with the evolution of state clusters identified by the box clustering algorithm and the consensus clustering algorithm on the partial correlation matrices. We find that dramatic increases in the systemic risk are usually accompanied by regime shifts, which provide a means of early detection of housing bubbles.
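    The RMT comparison at the heart of this analysis can be sketched as follows: the eigenvalues of an empirical correlation matrix are compared against the Marchenko-Pastur upper edge expected for pure noise, and eigenvalues above it are read as carrying economic signal. A toy example with one common factor (synthetic data standing in for the state-level house price series):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 1000                        # 50 "states", 1000 observations each

# one market-wide common factor drives all series, on top of idiosyncratic noise
common = rng.normal(size=T)
returns = 0.5 * common + rng.normal(size=(N, T))
returns = (returns - returns.mean(1, keepdims=True)) / returns.std(1, keepdims=True)

C = returns @ returns.T / T            # empirical correlation matrix
eigs = np.linalg.eigvalsh(C)

lam_plus = (1 + np.sqrt(N / T)) ** 2   # Marchenko-Pastur upper edge for pure noise
n_signal = int((eigs > lam_plus).sum())
```

    Eigenvectors belonging to the deviating eigenvalues are then inspected for structure, which is where the geographical and growth-rate information described in the abstract appears.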

  12. Evolution of semilocal string networks. II. Velocity estimators

    NASA Astrophysics Data System (ADS)

    Lopez-Eiguren, A.; Urrestilla, J.; Achúcarro, A.; Avgoustidis, A.; Martins, C. J. A. P.

    2017-07-01

    We continue a comprehensive numerical study of semilocal string networks and their cosmological evolution. These can be thought of as hybrid networks comprised of (nontopological) string segments, whose core structure is similar to that of Abelian Higgs vortices, and whose ends have long-range interactions and behavior similar to that of global monopoles. Our study provides further evidence of a linear scaling regime, already reported in previous studies, for the typical length scale and velocity of the network. We introduce a new algorithm to identify the position of the segment cores. This allows us to determine the length and velocity of each individual segment and follow their evolution in time. We study the statistical distribution of segment lengths and velocities for radiation- and matter-dominated evolution in the regime where the strings are stable. Our segment detection algorithm gives higher length values than previous studies based on indirect detection methods. The statistical distribution shows no evidence of (anti)correlation between the speed and the length of the segments.

  13. Towards full-Braginskii implicit extended MHD

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2009-05-01

    Recently, viable algorithms have been proposed for the scalable, fully-implicit temporal integration of 3D resistive MHD and cold-ion extended MHD models. While significant, these achievements must be tempered by the fact that such models lack predictive capabilities in regimes of interest for magnetic fusion. Short of including kinetic closures, a natural evolution path towards predictability starts by considering additional terms as described in Braginskii's fluid closures in the collisional regime. Here, we focus on the inclusion of two fundamental elements of relevance for fusion plasmas: anisotropic parallel electron transport, and warm-ion physics (i.e., ion finite-Larmor-radius effects, included via gyroviscosity). Both these elements introduce significant numerical difficulties, due to the strong anisotropy in the former and the presence of dispersive waves in the latter. In this presentation, we will discuss progress in our fully implicit algorithmic formulation towards the inclusion of both these elements. L. Chacón, Phys. Plasmas 15, 056103 (2008); L. Chacón, J. Phys.: Conf. Series 125, 012041 (2008)

  14. SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip

    Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to fifth order). We validate our algorithm against several test problems: thermal stability of stationary plasma, stability of linear plasma waves, and the two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways of testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.

  15. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can simply be filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire dataset and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds, outside of which all data are rejected, was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, saving needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
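    The left/right-threshold genetic search can be sketched in miniature. The following toy GA (3 dimensions instead of 50, synthetic data, and an invented fitness weighting, none of which come from the original work) evolves per-dimension rejection thresholds by elitist selection and Gaussian mutation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 3, 400
X = rng.normal(size=(N, D))                      # per-run input features
good = (np.abs(X) < 1.5).all(axis=1)             # ground truth: "useful" runs live in a box

def fitness(thr):
    """Reward rejecting useless runs; penalize (weight 3) rejecting useful ones."""
    lo, hi = thr[:, 0], thr[:, 1]
    keep = ((X > lo) & (X < hi)).all(axis=1)     # runs that survive the filter
    return np.sum(~keep & ~good) - 3.0 * np.sum(~keep & good)

# population of candidate (left, right) threshold pairs, initialized outside the box
pop = np.stack([np.column_stack([-2.0 - rng.random(D), 2.0 + rng.random(D)])
                for _ in range(40)])
for _ in range(60):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(-scores)[:10]]                     # elitist selection
    children = (parents[rng.integers(0, 10, size=30)]
                + rng.normal(0.0, 0.1, size=(30, D, 2)))        # mutate copies of parents
    pop = np.concatenate([parents, children])

best = pop[int(np.argmax([fitness(t) for t in pop]))]
```

    Because elitism keeps the best individual alive, fitness is monotone over generations; the evolved thresholds contract toward the true acceptance box without any gradient information or global model fit.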

  16. Simulation of 3-D Nonequilibrium Seeded Air Flow in the NASA-Ames MHD Channel

    NASA Technical Reports Server (NTRS)

    Gupta, Sumeet; Tannehill, John C.; Mehta, Unmeel B.

    2004-01-01

    The 3-D nonequilibrium seeded air flow in the NASA-Ames experimental MHD channel has been numerically simulated. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. The algorithm has been extended in the present study to account for nonequilibrium seeded air flows. The electrical conductivity of the flow is determined using the program of Park. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the seeded flow. The computed results are in good agreement with the experimental data.

  17. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  18. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunney, Brian T.N.; Anderson, Robert W.

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  19. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis.
These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low count regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
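
    The correlation step described above can be sketched compactly. This is an illustration only: the Marr wavelet form is standard, but the crude robust-sigma threshold below stands in for WAVDETECT's calibrated sampling-distribution test, and the square-image assumption is ours.

```python
import numpy as np

def mexican_hat(size, sigma):
    """2-D Marr ("Mexican hat") wavelet kernel on a size x size grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx**2 + yy**2) / sigma**2
    k = (2.0 - r2) * np.exp(-r2 / 2.0)
    return k - k.mean()              # enforce zero mean (vanishing moment)

def detect(image, sigma=2.0, nsigma=5.0):
    """Correlate a (square) counts image with the wavelet via FFT and flag
    pixels whose coefficient exceeds nsigma times a robust noise scale.
    Illustrates the correlation step only, not the WAVDETECT significance
    calculation."""
    k = mexican_hat(image.shape[0], sigma)
    C = np.real(np.fft.ifft2(np.fft.fft2(image)
                             * np.fft.fft2(np.fft.ifftshift(k))))
    noise = np.median(np.abs(C - np.median(C))) / 0.6745   # robust sigma
    return C > nsigma * noise
```

    Because the wavelet has zero mean, a flat background correlates to zero and only localized excesses (sources) produce large coefficients.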

  20. Modeling: The Right Tool for the Job.

    ERIC Educational Resources Information Center

    Gavanasen, Varut; Hussain, S. Tariq

    1993-01-01

    Reviews the different types of models that can be used in groundwater modeling. Discusses the flow and contaminant transport models in the saturated zone, flow and contaminant transport in variably saturated flow regime, vapor transport, biotransformation models, multiphase models, optimization algorithms, and potential pitfalls of using these…

  1. Fourier phase retrieval with a single mask by Douglas-Rachford algorithms.

    PubMed

    Chen, Pengwen; Fannjiang, Albert

    2018-05-01

    The Fourier-domain Douglas-Rachford (FDR) algorithm is analyzed for phase retrieval with a single random mask. Since the uniqueness of the phase retrieval solution requires more than a single oversampled coded diffraction pattern, the extra information is imposed in either of the following forms: 1) the sector condition on the object; 2) another oversampled diffraction pattern, coded or uncoded. For both settings, the uniqueness of the projected fixed point is proved, and for setting 2) the local, geometric convergence is derived with a rate given by a spectral gap condition. Numerical experiments demonstrate global, power-law convergence of FDR from arbitrary initialization for both settings as well as for 3 or more coded diffraction patterns without oversampling. In practice, the geometric convergence can be recovered from the power-law regime by a simple projection trick, resulting in highly accurate reconstruction from generic initialization.
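
    The underlying iteration is the textbook Douglas-Rachford splitting between two constraint sets. The sketch below is a simplified stand-in for the paper's FDR variant: the random mask is omitted, and the object constraint is taken to be real, nonnegative, and support-limited (our assumptions).

```python
import numpy as np

def p_fourier(y, mag):
    """Project onto the set of images with prescribed Fourier magnitudes."""
    Y = np.fft.fft2(y)
    return np.fft.ifft2(mag * np.exp(1j * np.angle(Y)))

def p_object(y, support):
    """Project onto real, nonnegative images confined to the support."""
    return np.maximum(y.real, 0.0) * support

def douglas_rachford(mag, support, y0, iters=200):
    """Plain Douglas-Rachford splitting between the two constraint sets:
    y <- y + P_A(2 P_B(y) - y) - P_B(y)."""
    y = y0.astype(complex)
    for _ in range(iters):
        pb = p_fourier(y, mag)
        y = y + p_object(2 * pb - y, support) - pb
    return p_object(p_fourier(y, mag), support)
```

    An image satisfying both constraints is a fixed point of the iteration, which is the basic sanity property behind the uniqueness analysis in the paper.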

  2. Averaging scheme for atomic resolution off-axis electron holograms.

    PubMed

    Niermann, T; Lehmann, M

    2014-08-01

    All micrographs are limited by shot-noise, which is intrinsic to the detection process of electrons. For beam-insensitive specimens this limitation can in principle easily be circumvented by prolonged exposure times. However, in the high-resolution regime several instrumental instabilities limit the applicable exposure time. Particularly in the case of off-axis holography, the holograms are highly sensitive to the position and voltage of the electron-optical biprism. We present a novel reconstruction algorithm to average series of off-axis holograms while compensating for specimen drift, biprism drift, drift of biprism voltage, and drift of defocus, which all might cause problematic changes from exposure to exposure. We show an application of the algorithm utilizing also the possibilities of double biprism holography, which results in a high quality exit-wave reconstruction with 75 pm resolution at a very high signal-to-noise ratio. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Intrabreed Stratification Related to Divergent Selection Regimes in Purebred Dogs May Affect the Interpretation of Genetic Association Studies

    PubMed Central

    Chang, Melanie L.; Yokoyama, Jennifer S.; Branson, Nick; Dyer, Donna J.; Hitte, Christophe; Overall, Karen L.

    2009-01-01

    Until recently, canine genetic research has not focused on population structure within breeds, which may confound the results of case–control studies by introducing spurious correlations between phenotype and genotype that reflect population history. Intrabreed structure may exist when geographical origin or divergent selection regimes influence the choices of potential mates for breeding dogs. We present evidence for intrabreed stratification from a genome-wide marker survey in a sample of unrelated dogs. We genotyped 76 Border Collies, 49 Australian Shepherds, 17 German Shepherd Dogs, and 17 Portuguese Water Dogs for our primary analyses using Affymetrix Canine v2.0 single-nucleotide polymorphism (SNP) arrays. Subsets of autosomal markers were examined using clustering algorithms to facilitate assignment of individuals to populations and estimation of the number of populations represented in the sample. SNPs passing stringent quality control filters were employed for explicitly phylogenetic analyses reconstructing relationships between individuals using maximum parsimony and Bayesian methods. We used simulation studies to explore the possible effects of intrabreed stratification on genome-wide association studies. These analyses demonstrate significant stratification in at least one of our primary breeds of interest, the Border Collie. Demographic and pedigree data suggest that this population substructure may result from geographic isolation or divergent selection regimes practiced by breeders with different breeding program goals. Simulation studies indicate that such stratification could result in false discovery rates significant enough to confound genome-wide association analyses. Intrabreed stratification should be accounted for when designing and interpreting the results of case–control association studies using purebred dogs.

  4. Superposition of polarized waves at layered media: theoretical modeling and measurement

    NASA Astrophysics Data System (ADS)

    Finkele, Rolf; Wanielik, Gerd

    1997-12-01

    The detection of ice layers on road surfaces is a crucial requirement for a system that is designed to warn vehicle drivers of hazardous road conditions. In the millimeter wave regime at 76 GHz the dielectric constants of ice and conventional road surface materials (i.e. asphalt, concrete) are found to be nearly identical. Thus, if the layer of ice is very thin and therefore follows the roughness of the underlying road surface, it cannot be reliably detected using conventional algorithmic approaches. The method introduced in this paper extends and applies the theoretical work of Pancharatnam on the superposition of polarized waves. The projection of the Stokes vectors onto the Poincaré sphere traces a circle due to the variation of the thickness of the ice layer. The paper presents a method that utilizes the concept of wave superposition to detect this trace even if it is corrupted by stochastic variation due to rough surface scattering. Measurement results taken under real traffic conditions prove the validity of the proposed algorithms. Classification results are presented and discussed.

  5. Toward a unifying framework for evolutionary processes.

    PubMed

    Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew M; Trubenová, Barbora

    2015-10-21

    The fields of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been independently obtained in both fields, and many others are unique to their respective fields. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Structure formation beyond shell-crossing: nonperturbative expansions and late-time attractors

    NASA Astrophysics Data System (ADS)

    Pietroni, Massimo

    2018-06-01

    Structure formation in 1+1 dimensions is considered, with emphasis on the effects of shell-crossing. The breakdown of the perturbative expansion beyond shell-crossing is discussed, and it is shown, in a simple example, that the perturbative series can be extended to a transseries including nonperturbative terms. The latter converges to the exact result well beyond the range of validity of perturbation theory. The crucial role of the divergences induced by shell-crossing is discussed. They provide constraints on the structure of the transseries and act as a bridge between the perturbative and the nonperturbative sectors. Then, we show that the dynamics in the deep multistreaming regime is governed by attractors. In the case of simple initial conditions, these attractors coincide with the asymptotic configurations of the adhesion model, but in general they may differ. These results are applied to a cosmological setting, and an algorithm to build the attractor solution starting from the Zel'dovich approximation is developed. Finally, this algorithm is applied to the search of `haloes' and the results are compared with those obtained from the exact dynamical equations.

  7. Effect of a standardized treatment regime for infection after osteosynthesis.

    PubMed

    Hellebrekers, Pien; Leenen, Luke P H; Hoekstra, Meriam; Hietbrink, Falco

    2017-03-09

    Infection after osteosynthesis is an important complication with significant morbidity and even mortality. These infections are often caused by biofilm-producing bacteria. Treatment algorithms dictate an aggressive approach with surgical debridement and antibiotic treatment. The aim of this study is to analyze the effect of such an aggressive standardized treatment regime with implant retention for acute infection (existing <3 weeks) after osteosynthesis. We conducted a retrospective 2-year cohort study in a single, level 1 trauma center on infection occurring within 12 months following any osteosynthesis surgery. The standardized treatment regime consisted of implant retention, thorough surgical debridement, and immediate antibiotic combination therapy with rifampicin. The primary outcome was success, defined as consolidation of the fracture and resolved symptoms of infection. Culture and susceptibility testing were performed to identify bacteria and resistance patterns. Univariate analysis was conducted on patient-related factors in association with primary success and antibiotic resistance. Forty-nine patients were included for analysis. The primary success rate was 63% and the overall success rate 88%. Factors negatively associated with primary success were the following: Gustilo classification (P = 0.023), higher number of debridements needed (P = 0.015), inability of primary closure (P = 0.017), and subsequent application of vacuum therapy (P = 0.030). Adherence to the treatment regime was positively related to primary success (P = 0.034). The described treatment protocol results in high success rates, comparable with success rates achieved in staged exchange in prosthetic joint infection treatment.

  8. Slope-scale dynamic states of rockfalls

    NASA Astrophysics Data System (ADS)

    Agliardi, F.; Crosta, G. B.

    2009-04-01

    Rockfalls are common earth surface phenomena characterised by complex dynamics at the slope scale, depending on local block kinematics and slope geometry. We investigated the nature of this slope-scale dynamics by parametric 3D numerical modelling of rockfalls over synthetic slopes with different inclination, roughness and spatial resolution. Simulations were performed through an original code specifically designed for rockfall modeling, incorporating kinematic and hybrid algorithms with different damping functions available to model local energy loss by impact and pure rolling. Modelling results in terms of average velocity profiles suggest that three dynamic regimes (i.e. decelerating, steady-state and accelerating), previously recognized in the literature through laboratory experiments on granular flows, can develop at the slope scale depending on slope average inclination and roughness. Sharp changes in rockfall kinematics, including motion type and lateral dispersion of trajectories, are associated with the transition among different regimes. Associated threshold conditions, portrayed in "phase diagrams" as slope-roughness critical lines, were analysed depending on block size, impact/rebound angles, velocity and energy, and model spatial resolution. Motion in regime B (i.e. steady state) is governed by a slope-scale "viscous friction" with average velocity linearly related to the sine of slope inclination. This suggests an analogy between rockfall motion in regime B and Newtonian flow, whereas in regime C (i.e. accelerating) an analogy with a dilatant flow was observed. Thus, although the local behavior of single falling blocks is well described by rigid body dynamics, the slope-scale dynamics of rockfalls seem to statistically approach that of granular media. 
Possible outcomes of these findings include a discussion of the transition from rockfall to granular flow, the evaluation of the reliability of predictive models, and the implementation of criteria for a preliminary evaluation of hazard assessment and countermeasure planning.

  9. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patient characteristics and treatment history to improve performance, hence enjoying advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
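
    The classification perspective can be illustrated in a single-stage toy version. The paper's C-learning is multi-stage and backward-sequential with proper classification machinery; below, plain least squares stands in for both the outcome model and the classifier, and the construction (labels = sign of the estimated contrast, weights = its magnitude) is the generic weighted-classification recipe, not the paper's exact estimator.

```python
import numpy as np

def rule_by_weighted_classification(X, a, y):
    """Single-stage sketch: fit a per-arm outcome model Qhat(x, a), form the
    contrast C(x) = Qhat(x, 1) - Qhat(x, 0), then learn a treatment rule by
    weighted classification with labels sign(C) and weights |C|."""
    Z = np.column_stack([np.ones(len(X)), X])
    beta = {}
    for arm in (0, 1):
        m = a == arm
        beta[arm] = np.linalg.lstsq(Z[m], y[m], rcond=None)[0]
    contrast = Z @ beta[1] - Z @ beta[0]
    labels = np.where(contrast > 0, 1.0, -1.0)
    w = np.sqrt(np.abs(contrast))[:, None]           # sqrt-weights for WLS
    coef = np.linalg.lstsq(Z * w, labels * w[:, 0], rcond=None)[0]
    return lambda Xnew: (np.column_stack([np.ones(len(Xnew)), Xnew]) @ coef) > 0
```

    Weighting by |C| makes the classifier concentrate on patients for whom the treatment choice matters most, which is the key idea behind casting regime estimation as weighted misclassification.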

  10. Faster than classical quantum algorithm for dense formulas of exact satisfiability and occupation problems

    NASA Astrophysics Data System (ADS)

    Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán

    2016-07-01

    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability; while the second part performs a quantum search in such restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are respectively bounded by O(√(2^(n−M′))) and O(2^(n−M′)), where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm proves to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity for the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. 
The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as Occupation problem. The general version of the algorithm is presented and analyzed.

  11. Analysis of non-linear aeroelastic response of a supersonic thick fin with plunging, pinching and flapping free-plays

    NASA Astrophysics Data System (ADS)

    Firouz-Abadi, R. D.; Alavi, S. M.; Salarieh, H.

    2013-07-01

    The flutter of a 3-D rigid fin with a double-wedge section and free-play in the flapping, plunging and pitching degrees-of-freedom, operating in the supersonic and hypersonic flight speed regimes, has been considered. The aerodynamic model is obtained by local usage of the piston theory behind the shock and expansion analysis, and the structural model is obtained based on the Lagrange equation of motion. Such a model provides a fast, accurate algorithm for studying the aeroelastic behavior of the thick supersonic fin in the time domain. The dynamic behavior of the fin is considered over a large number of parameters that characterize the aeroelastic system. Results show that the free-play in the pitching, plunging and flapping degrees-of-freedom has significant effects on the oscillations exhibited by the aeroelastic system in the supersonic/hypersonic flight speed regimes. The simulations also show that the aeroelastic system behavior is greatly affected by some parameters, such as the Mach number, thickness, angle of attack, hinge position and sweep angle.

  12. Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia

    A problem of the stochastic nonlinear analysis of neuronal activity is studied using the Hindmarsh-Rose (HR) model as an example. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking dynamic regime into the bursting one. This stochastic phenomenon is specified by qualitative changes in distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains that allows us to describe geometrically a distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimation of critical values for the noise intensity corresponding to the qualitative changes in stochastic dynamics. We show that the obtained estimations are in good agreement with the numerical results. An interplay between noise-induced bursting and transitions from order to chaos is discussed.
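
    The numerical side of such studies rests on stochastic integration of the HR equations. A minimal Euler-Maruyama sketch follows; the parameter values are the common textbook choices and the additive noise on the membrane variable x alone is our assumption, not necessarily the paper's setup.

```python
import numpy as np

def hindmarsh_rose_em(T=500.0, dt=0.01, I=2.0, sigma=0.05, seed=0):
    """Euler-Maruyama integration of the 3-D Hindmarsh-Rose neuron
      dx = (y - a x^3 + b x^2 - z + I) dt + sigma dW
      dy = (c - d x^2 - y) dt
      dz = r (s (x - x_r) - z) dt
    with textbook parameters; returns the (n, 3) trajectory."""
    a, b, c, d, r, s, xr = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    out = np.empty((n, 3))
    x, y, z = -1.6, -10.0, 2.0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x_new = x + dt * (y - a * x**3 + b * x**2 - z + I) + sigma * dW
        y_new = y + dt * (c - d * x**2 - y)
        z_new = z + dt * r * (s * (x - xr) - z)
        x, y, z = x_new, y_new, z_new
        out[k] = x, y, z
    return out
```

    Sweeping sigma in such a simulation and inspecting the interspike-interval distribution is the numerical counterpart of the SSF-based critical-noise estimate described in the abstract.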

  13. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses the hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and prediction of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
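
    The axis-orthogonal partitioning idea can be made concrete with a small sketch. This is schematic only: plain linear fits stand in for the LSSVM local models, and the split rule (widest dimension at its median) is our simplification of HBT's error-driven splitting.

```python
import numpy as np

def fit_tree(X, y, depth=3, min_leaf=20):
    """Recursive axis-orthogonal partitioning with a local linear model in
    each leaf -- a schematic of local modeling over operating regimes."""
    Z = np.column_stack([np.ones(len(X)), X])
    if depth == 0 or len(X) < 2 * min_leaf:
        return ('leaf', np.linalg.lstsq(Z, y, rcond=None)[0])
    axis = np.argmax(X.max(0) - X.min(0))        # split the widest dimension
    thr = np.median(X[:, axis])
    left = X[:, axis] <= thr
    if left.all() or not left.any():             # degenerate split: stop
        return ('leaf', np.linalg.lstsq(Z, y, rcond=None)[0])
    return ('node', axis, thr,
            fit_tree(X[left], y[left], depth - 1, min_leaf),
            fit_tree(X[~left], y[~left], depth - 1, min_leaf))

def predict_one(tree, x):
    """Descend the tree to the leaf owning x and evaluate its local model."""
    while tree[0] == 'node':
        _, axis, thr, lo, hi = tree
        tree = lo if x[axis] <= thr else hi
    return np.concatenate([[1.0], x]) @ tree[1]
```

    Each leaf corresponds to one operating regime with its own independent local model, which is the structural idea the LNF network builds on.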

  14. Finding the Missing Physics: Simulating Polydisperse Polymer Melts

    NASA Astrophysics Data System (ADS)

    Rorrer, Nicholas; Dorgan, John

    2014-03-01

    A Monte Carlo algorithm has been developed to model polydisperse polymer melts. For the first time, this enables the specification of a predetermined molecular weight distribution for lattice-based simulations. It is demonstrated how to map an arbitrary probability distribution onto a discrete number of chains residing on an fcc lattice. The resulting algorithm is able to simulate a wide variety of behaviors for polydisperse systems including confinement effects, shear flow, and parabolic flow. The dynamic version of the algorithm accurately captures Rouse dynamics for short polymer chains, and reptation-like dynamics for longer chain lengths [1]. When polydispersity is introduced, smaller Rouse times and a broadened transition between different scaling regimes are observed. Rouse times also decrease under confinement for both polydisperse and monodisperse systems, and chain length dependent migration effects are observed. The steady-state version of the algorithm enables the simulation of flow, and when polydisperse systems are subject to parabolic (Poiseuille) flow, a migration phenomenon based on chain length is again present. These and other phenomena highlight the importance of including polydispersity in obtaining physically realistic simulations of polymeric melts. [1] Dorgan, J.R.; Rorrer, N.A.; Maupin, C.M., Macromolecules 2012, 45(21), 8833-8840. Work funded by the Fluid Dynamics program of the National Science Foundation under grant CBET-1067707.
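
    One common way to realize "mapping an arbitrary probability distribution onto a discrete number of chains" is to draw integer chain lengths from a Schulz-Zimm (gamma) law parameterized by the number-average length and the polydispersity index. The choice of distribution and parameterization below is ours for illustration, not necessarily the paper's.

```python
import numpy as np

def sample_chain_lengths(n_chains, mn, pdi, seed=0):
    """Draw integer chain lengths whose number distribution follows a
    Schulz-Zimm (gamma) law with number-average length mn and
    polydispersity index pdi = Mw/Mn. For a gamma law, PDI = 1 + 1/k,
    so the shape parameter is k = 1/(pdi - 1)."""
    rng = np.random.default_rng(seed)
    k = 1.0 / (pdi - 1.0)
    lengths = rng.gamma(k, mn / k, size=n_chains)
    return np.maximum(1, np.rint(lengths).astype(int))

def measured_pdi(lengths):
    """PDI = Mw/Mn of a set of chain lengths (weight- over number-average)."""
    mn = lengths.mean()
    mw = (lengths.astype(float)**2).sum() / lengths.sum()
    return mw / mn
```

    The discretized sample can then be assigned chain by chain onto the lattice; checking the realized Mn and PDI against the targets verifies the mapping.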

  15. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.

  16. RHIC and LHC Phenomena with a Unified Parton Transport

    NASA Astrophysics Data System (ADS)

    Bouras, Ioannis; El, Andrej; Fochler, Oliver; Reining, Felix; Senzel, Florian; Uphoff, Jan; Wesp, Christian; Xu, Zhe; Greiner, Carsten

    We discuss recent applications of the partonic pQCD-based cascade model BAMPS with a focus on heavy-ion phenomenology in the hard and soft momentum ranges. The nuclear modification factor as well as elliptic flow are calculated in BAMPS for RHIC and LHC energies. These observables are also discussed within the same framework for charm and bottom quarks. Contributing to the recent jet-quenching investigations, we present first preliminary results on the application of jet reconstruction algorithms in BAMPS. Finally, collective effects induced by jets are investigated: we demonstrate the development of Mach cones in ideal matter as well as in the highly viscous regime.

  17. RHIC and LHC phenomena with a unified parton transport

    NASA Astrophysics Data System (ADS)

    Bouras, Ioannis; El, Andrej; Fochler, Oliver; Reining, Felix; Senzel, Florian; Uphoff, Jan; Wesp, Christian; Xu, Zhe; Greiner, Carsten

    2012-11-01

    We discuss recent applications of the partonic pQCD-based cascade model BAMPS, with a focus on heavy-ion phenomenology in the hard and soft momentum ranges. The nuclear modification factor as well as elliptic flow are calculated in BAMPS for RHIC and LHC energies. These observables are also discussed within the same framework for charm and bottom quarks. Contributing to recent jet-quenching investigations, we present first preliminary results on the application of jet reconstruction algorithms in BAMPS. Finally, collective effects induced by jets are investigated: we demonstrate the development of Mach cones in ideal matter as well as in the highly viscous regime.

  18. Small UAV Research and Evolution in Long Endurance Electric Powered Vehicles

    NASA Technical Reports Server (NTRS)

    Logan, Michael J.; Chu, Julio; Motter, Mark A.; Carter, Dennis L.; Ol, Michael; Zeune, Cale

    2007-01-01

    This paper describes recent research into the advancement of small, electric powered unmanned aerial vehicle (UAV) capabilities. Specifically, topics include the improvements made in battery technology, design methodologies, avionics architectures and algorithms, materials and structural concepts, propulsion system performance prediction, and others. The results of prototype vehicle designs and flight tests are discussed in the context of their usefulness in defining and validating progress in the various technology areas. Further areas of research need are also identified. These include the need for more robust operating regimes (wind, gust, etc.), and continued improvement in payload fraction vs. endurance.

  19. DynPeak: An Algorithm for Pulse Detection and Frequency Analysis in Hormonal Time Series

    PubMed Central

    Vidal, Alexandre; Zhang, Qinghua; Médigue, Claire; Fabre, Stéphane; Clément, Frédérique

    2012-01-01

    The endocrine control of the reproductive function is often studied from the analysis of luteinizing hormone (LH) pulsatile secretion by the pituitary gland. Whereas measurements in the cavernous sinus cumulate anatomical and technical difficulties, LH levels can be easily assessed from jugular blood. However, plasma levels result from a convolution process due to clearance effects when LH enters the general circulation. Simultaneous measurements comparing LH levels in the cavernous sinus and jugular blood have revealed clear differences in pulse shape, amplitude and baseline. Besides, experimental sampling occurs at a relatively low frequency (typically every 10 min) with respect to the highest LH release frequency (one pulse per hour), and the resulting LH measurements are corrupted by both experimental and assay noise. As a result, the pattern of plasma LH may not be clearly pulsatile. Yet, reliable information on the InterPulse Intervals (IPI) is a prerequisite to studying precisely the steroid feedback exerted at the pituitary level. Hence, there is a real need for robust IPI detection algorithms. In this article, we present an algorithm for monitoring LH pulse frequency, drawing both on the available endocrinological knowledge of the LH pulse (shape and duration with respect to the frequency regime) and on synthetic LH data generated by a simple model. We use the synthetic data to clarify some basic notions underlying our algorithmic choices. We focus on explaining how the sampling process drastically alters the original pattern of secretion, and especially the amplitude of the detectable pulses. We then describe the algorithm in detail and apply it to different sets of both synthetic and experimental LH time series. We further comment on how to diagnose possible outliers from the series of IPIs, which is the main output of the algorithm. PMID:22802933
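    As a toy illustration of the pulse-detection problem (not the DynPeak algorithm itself), the following sketch flags local maxima that rise a minimum amplitude above a crude local baseline in a series sampled every 10 min, then reports the inter-pulse intervals; the trace and thresholds are hypothetical.

```python
def detect_pulses(series, sampling_min, min_amplitude):
    """Return indices of detected pulses and inter-pulse intervals (minutes)."""
    peaks = []
    for i in range(1, len(series) - 1):
        # candidate pulse: a strict local maximum
        if series[i] > series[i - 1] and series[i] >= series[i + 1]:
            baseline = min(series[max(0, i - 3):i + 4])   # crude local baseline
            if series[i] - baseline >= min_amplitude:
                peaks.append(i)
    ipis = [(b - a) * sampling_min for a, b in zip(peaks, peaks[1:])]
    return peaks, ipis

# Synthetic LH-like trace sampled every 10 min, with two pulses ~60 min apart.
trace = [1.0, 1.1, 3.0, 2.0, 1.3, 1.1, 1.0, 1.2, 3.2, 2.1, 1.4, 1.1]
peaks, ipis = detect_pulses(trace, sampling_min=10, min_amplitude=1.0)
print(peaks, ipis)  # [2, 8] [60]
```

    The 10-min sampling grid illustrates the point made in the abstract: a pulse's true peak almost never coincides with a sample, so the detectable amplitude is systematically underestimated.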

  20. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. This approach addresses the issue of better fitting riverine ecosystem requirements to existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management for the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN; the human requirement was to provide a high satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that could meet both human and ecosystem needs. The wide spread of Pareto-optimal solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise through the trade-off between operational strategies for human and ecosystem needs.
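    The core of NSGA-II is non-dominated sorting. A minimal sketch of Pareto dominance and first-front extraction follows; the two objectives (water-supply shortage and ecosystem deficit, both minimized) and the solution values are hypothetical.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: the first NSGA-II front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Objectives: (water-supply shortage, ecosystem deficit), both to be minimized.
solutions = [(0.1, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.8, 0.8)]
print(pareto_front(solutions))  # [(0.1, 0.9), (0.4, 0.4), (0.9, 0.1)]
```

    The surviving points are exactly the trade-off curve the abstract describes: no front member can improve one requirement without worsening the other, so the decision maker picks a compromise among them.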

  1. Explore the impacts of river flow and quality on biodiversity for water resources management by AI techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Chang, Li-Chiu

    2016-04-01

    Water resources development is very challenging in Taiwan due to its diverse geographic environment and climatic conditions. To pursue sustainable water resources development, rationality and integrity are essential for water resources planning. River water quality and flow regimes are closely related to each other and affect river ecosystems simultaneously. This study aims to explore the complex impacts of water quality and flow regimes on the fish community in order to comprehend the state of the eco-hydrological system in the Danshui River of northern Taiwan. To devise an effective and comprehensive strategy for sustainable water resources management, this study first models fish diversity by implementing a hybrid artificial neural network (ANN) based on long-term heterogeneous observational data of water quality, stream flow and fish species in the river. Then we use stream flow to estimate the loss of dissolved oxygen based on back-propagation neural networks (BPNNs). Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is applied to river flow management for the Shihmen Reservoir, the main reservoir in the study area. In addition to satisfying the water demands of human beings and ecosystems, we also consider water quality in river flow management. The ecosystem requirement takes the form of maximizing fish diversity, which can be estimated by the hybrid ANN. The human requirement is to provide a high satisfaction degree of water supply, while the water quality requirement is to reduce the loss of dissolved oxygen in the river among flow stations. The results demonstrate that the proposed methodology can offer diversified alternative strategies for reservoir operation and improve reservoir operation strategies, producing downstream flows that better meet both human and ecosystem needs as well as maintain river water quality.
Keywords: Artificial intelligence (AI), Artificial neural networks (ANNs), Non-dominated sorting genetic algorithm II (NSGA-II), Sustainable water resources management, Flow regime, River ecosystem.

  2. Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations

    NASA Astrophysics Data System (ADS)

    Romanihin, S. M.; Tronin, I. V.

    2016-09-01

    We present a method and results for determining optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters that provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters for the Iguassu GC at different rotor speeds.

  3. Physics Based Model for Online Fault Detection in Autonomous Cryogenic Loading System

    NASA Technical Reports Server (NTRS)

    Kashani, Ali; Devine, Ekaterina Viktorovna P; Luchinsky, Dmitry Georgievich; Smelyanskiy, Vadim; Sass, Jared P.; Brown, Barbara L.; Patterson-Hine, Ann

    2013-01-01

    We report progress in the development of the chilldown model for the rapid cryogenic loading system developed at KSC. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The two-phase flow of the chilldown is approximated as one-dimensional homogeneous fluid flow with a no-slip condition for the interphase velocity. The model is built using the commercial SINDA/FLUINT software. The numerical predictions are in good agreement with the experimental time traces. The obtained results pave the way to the application of the SINDA/FLUINT model as a verification tool for the design and algorithm development required for autonomous loading operation.

  4. Characterizing white matter tissue in large strain via asymmetric indentation and inverse finite element modeling.

    PubMed

    Feng, Yuan; Lee, Chung-Hao; Sun, Lining; Ji, Songbai; Zhao, Xuefeng

    2017-01-01

    Characterizing the mechanical properties of white matter is important for understanding and modeling brain development and injury. With embedded aligned axonal fibers, white matter is typically modeled as a transversely isotropic material. However, most studies characterize white matter tissue using models with a single anisotropic invariant or in a small-strain regime. In this study, we combined a single experimental procedure - asymmetric indentation - with inverse finite element (FE) modeling to estimate the nearly incompressible transversely isotropic material parameters of white matter. A minimal form comprising three parameters was employed to simulate indentation responses in the large-strain regime. The parameters were estimated using a global optimization procedure based on a genetic algorithm (GA). Experimental data from two indentation configurations of porcine white matter, parallel and perpendicular to the axonal fiber direction, were utilized to estimate the model parameters. Results in this study confirmed a strong mechanical anisotropy of white matter at large strain. Further, our results suggested that both indentation configurations are needed to estimate the parameters with sufficient accuracy, and that indenter-sample friction is important. Finally, we also showed that the estimated parameters were consistent with those previously obtained via a trial-and-error forward FE method in the small-strain regime. These findings are useful in the modeling and parameterization of white matter, especially under large deformation, and demonstrate the potential of the proposed asymmetric indentation technique to characterize other soft biological tissues with transversely isotropic properties.

  5. Aerosol retrieval experiments in the ESA Aerosol_cci project

    NASA Astrophysics Data System (ADS)

    Holzer-Popp, T.; de Leeuw, G.; Griesfeller, J.; Martynenko, D.; Klüser, L.; Bevan, S.; Davies, W.; Ducos, F.; Deuzé, J. L.; Graigner, R. G.; Heckel, A.; von Hoyningen-Hüne, W.; Kolmonen, P.; Litvinov, P.; North, P.; Poulsen, C. A.; Ramon, D.; Siddans, R.; Sogacheva, L.; Tanre, D.; Thomas, G. E.; Vountas, M.; Descloitres, J.; Griesfeller, J.; Kinne, S.; Schulz, M.; Pinnock, S.

    2013-08-01

    Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010-2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. 
The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed for an assessment of sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components and partly also a priori aerosol-type climatology is beneficial. On the other hand the use of an AATSR-based common cloud mask meant a clear improvement (though with significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. It is noted that all these observations are mostly consistent for all five analyses (global land, global coastal, three regional), which can be understood well, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low and high absorption fine mode, sea salt and dust).

  6. Attosecond Streaking in the Water Window: A New Regime of Attosecond Pulse Characterization

    NASA Astrophysics Data System (ADS)

    Cousin, Seth L.; Di Palo, Nicola; Buades, Bárbara; Teichmann, Stephan M.; Reduzzi, M.; Devetta, M.; Kheifets, A.; Sansone, G.; Biegert, Jens

    2017-10-01

    We report on the first streaking measurement of water-window attosecond pulses generated via high-harmonic generation, driven by sub-2-cycle, carrier-envelope-phase-stable, 1850-nm laser pulses. Both the central photon energy and the energy bandwidth far exceed what has been demonstrated thus far, warranting the investigation of the attosecond streaking technique in the soft-x-ray regime and of the limits of the FROG-CRAB retrieval algorithm under such conditions. We also discuss the problem of attochirp compensation and issues arising from the much lower photoionization cross sections compared with the extreme ultraviolet, in addition to the fact that several shells of the target gases are accessed simultaneously. Based on our investigation, we caution that the vastly different conditions in the soft-x-ray regime warrant a diligent examination of the fidelity of the measurement and the retrieval procedure.

  7. Defining pyromes and global syndromes of fire regimes.

    PubMed

    Archibald, Sally; Lehmann, Caroline E R; Gómez-Dans, Jose L; Bradstock, Ross A

    2013-04-16

    Fire is a ubiquitous component of the Earth system that is poorly understood. To date, a global-scale understanding of fire is largely limited to the annual extent of burning as detected by satellites. This is problematic because fire is multidimensional, and focus on a single metric belies its complexity and importance within the Earth system. To address this, we identified five key characteristics of fire regimes (size, frequency, intensity, season, and extent) and combined new and existing global datasets to represent each. We assessed how these global fire regime characteristics are related to patterns of climate, vegetation (biomes), and human activity. Cross-correlations demonstrate that only certain combinations of fire characteristics are possible, reflecting fundamental constraints on the types of fire regimes that can exist. A Bayesian clustering algorithm identified five global syndromes of fire regimes, or pyromes. Four pyromes represent distinctions between crown, litter, and grass-fueled fires, and the relationships of these to biomes and climate are not deterministic. Pyromes were partially discriminated on the basis of available moisture and rainfall seasonality. Human impacts also affected pyromes and are globally apparent as the driver of a fifth and unique pyrome that represents human-engineered modifications to fire characteristics. Differing biomes and climates may be represented within the same pyrome, implying that pathways of change in future fire regimes in response to changes in climate and human activity may be difficult to predict.
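    The study clusters fire-regime characteristics with a Bayesian algorithm; as a much simpler illustrative stand-in, plain k-means can group regime feature vectors into pyrome-like clusters. The two-dimensional features and data points below are hypothetical, not the paper's dataset.

```python
def kmeans(points, centers, iters=10):
    """Plain k-means: alternate nearest-center assignment and center updates."""
    labels = [0] * len(points)
    for _ in range(iters):
        for i, pt in enumerate(points):
            labels[i] = min(range(len(centers)),
                            key=lambda k: sum((p - c) ** 2
                                              for p, c in zip(pt, centers[k])))
        for k in range(len(centers)):
            members = [pt for pt, lab in zip(points, labels) if lab == k]
            if members:  # move center to the mean of its assigned points
                centers[k] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers

# Hypothetical fire-regime features (mean fire size, fire frequency), rescaled
# to [0, 1]: small frequent fires vs. large rare ones.
pts = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
labels, centers = kmeans(pts, centers=[[0.0, 1.0], [1.0, 0.0]])
print(labels)  # [0, 0, 1, 1]
```

    Unlike this fixed-k sketch, the Bayesian clustering used in the paper can weigh model evidence for the number of clusters, which is how the five pyromes emerge from the data rather than being imposed.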

  8. Improved Spectral Calculations for Discrete Schrödinger Operators

    NASA Astrophysics Data System (ADS)

    Puelz, Charles

    This work details an O(n^2) algorithm for computing spectra of discrete Schrödinger operators with periodic potentials. Spectra of these objects enhance our understanding of fundamental aperiodic physical systems and contain rich theoretical structure of interest to the mathematical community. Previous work on the Harper model led to an O(n^2) algorithm relying on properties not satisfied by other aperiodic operators. Physicists working with the Fibonacci Hamiltonian, a popular quasicrystal model, have instead used a problematic dynamical-map approach or a sluggish O(n^3) procedure for their calculations. The algorithm presented in this work, a blend of well-established eigenvalue/eigenvector algorithms, provides researchers with a more robust computational tool of general utility. Application to the Fibonacci Hamiltonian in the sparsely studied intermediate coupling regime reveals structure in canonical coverings of the spectrum that will prove useful in motivating conjectures regarding band combinatorics and fractal dimensions.
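    The thesis' O(n^2) algorithm is more involved, but the band structure of a discrete Schrödinger operator with a period-n potential can already be probed with the classical transfer-matrix criterion: E lies in the spectrum iff |tr T(E)| <= 2, where T(E) is the product of one-period transfer matrices. A small sketch:

```python
def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def in_spectrum(E, potential):
    """Band condition for a periodic potential: |tr T(E)| <= 2, where T(E)
    is the product over one period of the matrices [[E - v, -1], [1, 0]]."""
    T = [[1.0, 0.0], [0.0, 1.0]]
    for v in potential:
        T = mat_mul([[E - v, -1.0], [1.0, 0.0]], T)
    return abs(T[0][0] + T[1][1]) <= 2.0

# Free discrete Laplacian (v = 0, period 1): the spectrum is exactly [-2, 2].
print(in_spectrum(0.0, [0.0]), in_spectrum(3.0, [0.0]))  # True False
```

    For quasiperiodic models such as the Fibonacci Hamiltonian, one applies this to periodic approximants of growing period, which is precisely where a fast, numerically robust eigenvalue method pays off.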

  9. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Jerome, Joseph; Osher, Stanley

    1989-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  10. The kinetic regime of the Vicsek model

    NASA Astrophysics Data System (ADS)

    Chepizhko, A. A.; Kulinskii, V. L.

    2009-12-01

    We consider the dynamics of a system of self-propelled particles modeled via the Vicsek algorithm in the continuum time limit. It is shown that the alignment process for the velocities can be subdivided into two regimes: a "fast" kinetic and a "slow" hydrodynamic one. In the fast kinetic regime, the alignment of a particle's velocity to its local neighborhood takes place on a characteristic relaxation time, so that ever larger regions of aligned velocity arise. These regions then align their velocities with one another, giving rise to the hydrodynamic regime of the dynamics. We propose a mean-field-like approach that takes into account the correlations between density and velocity, and compare its theoretical predictions with numerical simulations. The relation between the Vicsek model in the zero-velocity limit and the Kuramoto model is established. A mean-field approach accounting for the dynamic change of the neighborhood is proposed. The nature of the discontinuity in the dependence of the order parameter in the case of vectorial noise, revealed in Grégoire and Chaté, Phys. Rev. Lett. 92, 025702 (2004), is discussed and an explanation for it is proposed.
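    A minimal Vicsek update illustrates the alignment step discussed above. All parameters here are hypothetical, and the noise amplitude is set to zero so the kinetic-regime alignment is deterministic; with the interaction radius chosen large enough that every particle sees every other, alignment completes in a single step.

```python
import math, random

def vicsek_step(pos, ang, L, r, v, eta, rng):
    """One Vicsek update: each particle adopts the mean heading of all
    neighbors within radius r (periodic box of side L), plus angular noise."""
    new_ang = []
    for xi, yi in pos:
        sx = sy = 0.0
        for (xj, yj), aj in zip(pos, ang):
            dx = (xj - xi + L / 2) % L - L / 2   # shortest periodic displacement
            dy = (yj - yi + L / 2) % L - L / 2
            if dx * dx + dy * dy <= r * r:
                sx += math.cos(aj)
                sy += math.sin(aj)
        new_ang.append(math.atan2(sy, sx) + eta * (rng.random() - 0.5))
    new_pos = [((x + v * math.cos(a)) % L, (y + v * math.sin(a)) % L)
               for (x, y), a in zip(pos, new_ang)]
    return new_pos, new_ang

def order_parameter(ang):
    """Mean normalized velocity: 1 = perfect alignment, ~0 = disorder."""
    return math.hypot(sum(map(math.cos, ang)), sum(map(math.sin, ang))) / len(ang)

rng = random.Random(0)
pos = [(rng.random(), rng.random()) for _ in range(50)]
ang = [2 * math.pi * rng.random() for _ in range(50)]
for _ in range(10):
    pos, ang = vicsek_step(pos, ang, L=1.0, r=0.8, v=0.03, eta=0.0, rng=rng)
print(round(order_parameter(ang), 6))  # 1.0 (zero noise, fully connected)
```

    Shrinking r below the box size restores the two-stage picture from the abstract: locally aligned regions form first (kinetic regime), then slowly align with each other (hydrodynamic regime).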

  11. Use of artificial landscapes to isolate controls on burn probability

    Treesearch

    Marc-Andre Parisien; Carol Miller; Alan A. Ager; Mark A. Finney

    2010-01-01

    Techniques for modeling burn probability (BP) combine the stochastic components of fire regimes (ignitions and weather) with sophisticated fire growth algorithms to produce high-resolution spatial estimates of the relative likelihood of burning. Despite the numerous investigations of fire patterns from either observed or simulated sources, the specific influence of...

  12. Systemic risk and spatiotemporal dynamics of the US housing market

    PubMed Central

    Meng, Hao; Xie, Wen-Jie; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2014-01-01

    Housing markets play a crucial role in economies and the collapse of a real-estate bubble usually destabilizes the financial system and causes economic recessions. We investigate the systemic risk and spatiotemporal dynamics of the US housing market (1975–2011) at the state level based on the Random Matrix Theory (RMT). We identify richer economic information in the largest eigenvalues deviating from RMT predictions for the housing market than for stock markets and find that the component signs of the eigenvectors contain either geographical information or the extent of differences in house price growth rates or both. By looking at the evolution of different quantities such as eigenvalues and eigenvectors, we find that the US housing market experienced six different regimes, which is consistent with the evolution of state clusters identified by the box clustering algorithm and the consensus clustering algorithm on the partial correlation matrices. We find that dramatic increases in the systemic risk are usually accompanied by regime shifts, which provide a means of early detection of housing bubbles. PMID:24413626
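    The RMT analysis above rests on eigenvalues of (partial) correlation matrices. As a minimal illustration, the largest eigenvalue, the analog of the "market mode" whose deviation from RMT predictions carries the economic information, can be computed by power iteration. The matrix below is a toy stand-in, not housing data.

```python
def power_iteration(M, iters=100):
    """Largest eigenvalue of a symmetric matrix via the power method."""
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = [sum(m * x for m, x in zip(row, v)) for row in M]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                          # normalized iterate
        Mv = [sum(m * x for m, x in zip(row, v)) for row in M]
        lam = sum(vi * wi for vi, wi in zip(v, Mv))        # Rayleigh quotient
    return lam

# Toy stand-in for a correlation structure: two strongly coupled series.
# Eigenvalues of this matrix are 3 and 1; the larger one dominates.
C = [[2.0, 1.0], [1.0, 2.0]]
print(round(power_iteration(C), 6))  # 3.0
```

    In the RMT setting one compares such leading eigenvalues against the Marchenko-Pastur bound for random correlations; eigenvalues well above it, as in the study, signal genuine collective (systemic) behavior.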

  13. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2011-12-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
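MVFSA is a variant of simulated annealing; the basic Metropolis-annealing loop it builds on can be sketched for a single hypothetical tunable parameter with a squared-error "skill score". This is an illustrative toy, not the MVFSA sampler itself.

```python
import math, random

def simulated_annealing(loss, x0, steps, t0, rng):
    """Metropolis annealing: always accept improvements, accept worse moves
    with probability exp(-dE/T), with temperature T decaying linearly."""
    x, e = x0, loss(x0)
    best_x, best_e = x, e
    for k in range(steps):
        T = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = x + rng.gauss(0.0, 0.5)           # random perturbation
        ec = loss(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / T):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# One hypothetical tunable parameter; "skill score" = squared error vs. target.
rng = random.Random(42)
x, e = simulated_annealing(lambda p: (p - 1.0) ** 2, x0=5.0, steps=2000, t0=1.0, rng=rng)
print(round(x, 2), e < 1.0)
```

The early high-temperature phase lets the sampler escape local minima in the skill score; the study's multi-chain ("Multiple") variant runs several such walks to characterize parameter uncertainty, not just the single best point.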

  14. Uncertainty Quantification and Parameter Tuning: A Case Study of Convective Parameterization Scheme in the WRF Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.

    2012-04-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  15. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2012-03-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e. the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
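    The MVFSA sampler described above is, at its core, very fast simulated annealing over a bounded parameter space. A minimal sketch follows; the quadratic `skill_error`, the five normalized parameters, the bounds, and the 1/k temperature schedule are all illustrative stand-ins, not the WRF/KF setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def skill_error(params):
    # Hypothetical stand-in for the model-vs-observation skill score; the real
    # objective compares WRF precipitation output against SGP observations.
    return np.sum((params - 0.3) ** 2)

def vfsa_step(params, temperature, lo, hi):
    """One very-fast-simulated-annealing proposal (Cauchy-like generating draw)."""
    u = rng.uniform(size=params.shape)
    y = np.sign(u - 0.5) * temperature * ((1 + 1 / temperature) ** np.abs(2 * u - 1) - 1)
    return np.clip(params + y * (hi - lo), lo, hi)

lo, hi = np.zeros(5), np.ones(5)       # five normalized tunable parameters (assumed)
params = rng.uniform(size=5)
best, best_err = params.copy(), skill_error(params)
for k in range(1, 2001):
    T = 1.0 / k                         # simple annealing schedule (assumed)
    cand = vfsa_step(params, T, lo, hi)
    d = skill_error(cand) - skill_error(params)
    if d < 0 or rng.uniform() < np.exp(-d / T):  # Metropolis acceptance
        params = cand
    if skill_error(params) < best_err:
        best, best_err = params.copy(), skill_error(params)
print(round(best_err, 4))
```

    The heavy-tailed proposal lets the sampler occasionally make large jumps even at low temperature, which is what lets it escape local minima of the skill score.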

  16. Population annealing simulations of a binary hard-sphere mixture

    NASA Astrophysics Data System (ADS)

    Callaham, Jared; Machta, Jonathan

    2017-06-01

    Population annealing is a sequential Monte Carlo scheme well suited to simulating equilibrium states of systems with rough free energy landscapes. Here we use population annealing to study a binary mixture of hard spheres. Population annealing is a parallel version of simulated annealing with an extra resampling step that ensures that a population of replicas of the system represents the equilibrium ensemble at every packing fraction in an annealing schedule. The algorithm and its equilibration properties are described, and results are presented for a glass-forming fluid composed of a 50/50 mixture of hard spheres with diameter ratio of 1.4:1. For this system, we obtain precise results for the equation of state in the glassy regime up to packing fractions φ ≈0.60 and study deviations from the Boublik-Mansoori-Carnahan-Starling-Leland equation of state. For higher packing fractions, the algorithm falls out of equilibrium and a free volume fit predicts jamming at packing fraction φ ≈0.667 . We conclude that population annealing is an effective tool for studying equilibrium glassy fluids and the jamming transition.
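    The resample-then-decorrelate structure of population annealing can be sketched on a toy double-well landscape, annealing in inverse temperature rather than in packing fraction as the hard-sphere study does; the landscape, population size, and schedule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    # Toy double-well landscape standing in for a rough free energy surface.
    return (x ** 2 - 1) ** 2

pop = rng.uniform(-2, 2, size=5000)      # population of replicas
betas = np.linspace(0.1, 5.0, 50)        # annealing schedule in beta
for b_old, b_new in zip(betas[:-1], betas[1:]):
    # Resampling step: reweight replicas so the population represents the
    # equilibrium ensemble at the new temperature, then resample.
    w = np.exp(-(b_new - b_old) * energy(pop))
    w /= w.sum()
    pop = rng.choice(pop, size=pop.size, p=w)
    # Short Metropolis sweep at the new temperature to decorrelate replicas
    for _ in range(5):
        trial = pop + rng.normal(scale=0.2, size=pop.size)
        acc = rng.uniform(size=pop.size) < np.exp(-b_new * (energy(trial) - energy(pop)))
        pop = np.where(acc, trial, pop)
print(round(float(np.mean(np.abs(pop))), 3))
```

    At low temperature the population concentrates in the two wells at x = ±1; in the hard-sphere application the Metropolis sweep is replaced by sphere moves and the reweighting by the survival fraction under compression.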

  17. The link between eddy-driven jet variability and weather regimes in the North Atlantic-European sector

    NASA Astrophysics Data System (ADS)

    Madonna, E.; Li, C.; Grams, C. M.; Woollings, T.

    2017-12-01

    Understanding the variability of the North Atlantic eddy-driven jet is key to unravelling the dynamics, predictability and climate change response of extratropical weather in the region. This study aims to 1) reconcile two perspectives on wintertime variability in the North Atlantic-European sector and 2) clarify their link to atmospheric blocking. Two common views of wintertime variability in the North Atlantic are the zonal-mean framework comprising three preferred locations of the eddy-driven jet (southern, central, northern), and the weather regime framework comprising four classical North Atlantic-European regimes (Atlantic ridge AR, zonal ZO, European/Scandinavian blocking BL, Greenland anticyclone GA). We use a k-means clustering algorithm to characterize the two-dimensional variability of the eddy-driven jet stream, defined by the lower tropospheric zonal wind in the ERA-Interim reanalysis. The first three clusters capture the central jet and northern jet, along with a new mixed jet configuration; a fourth cluster is needed to recover the southern jet. The mixed cluster represents a split or strongly tilted jet, neither of which is well described in the zonal-mean framework, and has a persistence of about one week, similar to the other clusters. Connections between the preferred jet locations and weather regimes are corroborated - southern to GA, central to ZO, and northern to AR. In addition, the new mixed cluster is found to be linked to European/Scandinavian blocking, whose relation to the eddy-driven jet was previously unclear. The results highlight the necessity of bridging from weather to climate scales for a deeper understanding of atmospheric circulation variability.
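    A minimal version of the k-means step can be run on synthetic one-dimensional zonal-wind latitude profiles (the study clusters two-dimensional ERA-Interim fields; the jet positions, widths, and sample count below are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily zonal-wind latitude profiles with three preferred jet positions
lats = np.linspace(20, 70, 50)
centers_true = [35, 45, 55]            # southern / central / northern (illustrative)
days = np.stack([np.exp(-(lats - rng.choice(centers_true) - rng.normal(0, 2)) ** 2 / 50)
                 for _ in range(600)])

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm, the workhorse behind k-means clustering."""
    centroids = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):     # guard against empty clusters
                centroids[j] = X[labels == j].mean(0)
    return labels, centroids

labels, cents = kmeans(days, k=3)
jet_lat = lats[cents.argmax(1)]         # peak latitude of each cluster centroid
print(np.sort(np.round(jet_lat)))
```

    In the paper the cluster number k is increased until the southern jet emerges as its own cluster, which is how the mixed (split/tilted) configuration was isolated.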

  18. Large eddy simulation of bluff body stabilized premixed and partially premixed combustion

    NASA Astrophysics Data System (ADS)

    Porumbel, Ionut

    Large Eddy Simulation (LES) of bluff body stabilized premixed and partially premixed combustion close to the flammability limit is carried out in this thesis. The main goal of the thesis is the study of the equivalence ratio effect on flame stability and dynamics in premixed and partially premixed flames. An LES numerical algorithm able to handle the entire range of combustion regimes and equivalence ratios is developed for this purpose. The algorithm has no ad-hoc adjustable model parameters and is able to respond automatically to variations in the inflow conditions, without user intervention. Algorithm validation is achieved by conducting LES of reactive and non-reactive flow. Comparison with experimental data shows good agreement for both mean and unsteady flow properties. In the reactive flow, two scalar closure models, Eddy Break-Up (EBULES) and Linear Eddy Mixing (LEMLES), are used and compared. Over important regions, the flame lies in the Broken Reaction Zone regime. Here, the EBU model assumptions fail. In LEMLES, the reaction-diffusion equation is not filtered, but resolved on a linear domain and the model maintains validity. The flame thickness predicted by LEMLES is smaller and the flame is faster to respond to turbulent fluctuations, resulting in a more significant wrinkling of the flame surface when compared to EBULES. As a result, LEMLES captures better the subtle effects of the flame-turbulence interaction, the flame structure shows higher complexity, and the far field spreading of the wake is closer to the experimental observations. Three premixed (φ = 0.6, 0.65, and 0.75) cases are simulated. As expected, for the leaner case (φ = 0.6) the flame temperature is lower, the heat release is reduced and vorticity is stronger. As a result, the flame in this case is found to be unstable. In the rich case (φ = 0.75), the flame temperature is higher, and the spreading rate of the wake is increased due to the higher amount of heat release. 
    The ignition delay in the lean case (φ = 0.6) is larger when compared to the rich case (φ = 0.75), in correlation with the instantaneous flame stretch. Partially premixed combustion is simulated for cases where the transverse profile of the inflow equivalence ratio is variable. The simulations show that for mixtures leaner in the core the vortical pattern tends towards anti-symmetry and the heat release decreases, also resulting in instability of the flame. For mixtures richer in the core, the flame displays sinusoidal flapping that results in larger wake spreading. The numerical simulations presented in this study employed simple, one-step chemical mechanisms. More accurate predictions of flame stability will require the use of detailed chemistry, raising the computational cost of the simulation. To address this issue, a novel algorithm for training Artificial Neural Networks (ANN) for prediction of the chemical source terms has been implemented and tested. Compared to earlier methods, such as reaction rate tabulation, the main advantages of the ANN method are reductions in CPU time, disk space, and memory. The results of the testing indicate reasonable algorithm accuracy, although some regions of the flame exhibit relatively significant differences compared to direct integration.

  19. Numerical Simulation of 3-D Supersonic Viscous Flow in an Experimental MHD Channel

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Gupta, Sumeet; Mehta, Unmeel B.

    2004-01-01

    The 3-D supersonic viscous flow in an experimental MHD channel has been numerically simulated. The experimental MHD channel is currently in operation at NASA Ames Research Center. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a new 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfield can be computed using multiple streamwise sweeps with an iterated PNS algorithm. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the flow. The computed results are in good agreement with the available experimental data.

  20. Modeling the Gross-Pitaevskii Equation Using the Quantum Lattice Gas Method

    NASA Astrophysics Data System (ADS)

    Oganesov, Armen

    We present an improved Quantum Lattice Gas (QLG) algorithm as a mesoscopic unitary perturbative representation of the mean field Gross-Pitaevskii (GP) equation for Bose-Einstein Condensates (BECs). The method employs an interleaved sequence of unitary collide and stream operators. QLG is applicable to many different scalar potentials in the weak interaction regime and has been used to model the Korteweg-de Vries (KdV), Burgers and GP equations. It can be implemented on both quantum and classical computers and is extremely scalable. We present results for 1D soliton solutions with positive and negative internal interactions, as well as vector solitons with inelastic scattering. In higher dimensions we look at the behavior of vortex ring reconnection. A further improvement is considered with a proper operator splitting technique via a Fourier transformation. This is well suited to quantum computers, since the quantum FFT is exponentially faster than its classical counterpart, which involves non-local data on the entire lattice (the quantum FFT is the backbone of Shor's algorithm for quantum factorization). We also present an imaginary time method in which we transform the Schrödinger equation into a diffusion equation for recovering ground state initial conditions of a quantum system suitable for the QLG algorithm.

  1. Laser Frequency Noise in Coherent Optical Systems: Spectral Regimes and Impairments.

    PubMed

    Kakkar, Aditya; Rodrigo Navarro, Jaime; Schatz, Richard; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Louchet, Hadrien; Popov, Sergei; Jacobsen, Gunnar

    2017-04-12

    Coherent communication networks are based on the ability to use multiple dimensions of the lightwave together with electrical domain compensation of transmission impairments. Electrical-domain dispersion compensation (EDC) provides many advantages such as network flexibility and enhanced fiber nonlinearity tolerance, but makes the system more susceptible to laser frequency noise (FN), e.g. to the local oscillator FN in systems with post-reception EDC. Although this problem has been extensively studied, statistically, for links assuming lasers with white-FN, many questions remain unanswered. Particularly, the influence of a realistic non-white FN-spectrum due to e.g., the presence of 1/f-flicker and carrier induced noise remains elusive and a statistical analysis becomes insufficient. Here we provide an experimentally validated theory for coherent optical links with lasers having general non-white FN-spectrum and EDC. The fundamental reason for the increased susceptibility is shown to be FN-induced symbol displacement that causes timing jitter and/or inter/intra symbol interference. We establish that different regimes of the laser FN-spectrum cause a different set of impairments. The influence of the impairments due to some regimes can be reduced by optimizing the corresponding mitigation algorithms, while other regimes cause irretrievable impairments. Theoretical boundaries of these regimes and corresponding criteria applicable to system/laser design are provided.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malone, Fionn D., E-mail: f.malone13@imperial.ac.uk; Lee, D. K. K.; Foulkes, W. M. C.

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  3. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves

    PubMed Central

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J. R.; Krenner, Hubert J.; Wixforth, Achim; Salditt, Tim

    2014-01-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length). PMID:25294979

  4. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves.

    PubMed

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J R; Krenner, Hubert J; Wixforth, Achim; Salditt, Tim

    2014-10-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length).
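    The iterative inversion of a far-field diffraction pattern can be illustrated with the classic error-reduction scheme, alternating a Fourier-modulus projection with a support/positivity projection, here on a toy 1-D profile. The object, support window, and iteration count are assumptions for illustration, not the authors' reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D object with compact support, standing in for the surface height profile
n = 128
obj = np.zeros(n)
obj[40:60] = np.hanning(20)
meas_mag = np.abs(np.fft.fft(obj))         # "measured" far-field modulus

support = np.zeros(n, dtype=bool)
support[35:65] = True                       # assumed support constraint
x = rng.uniform(size=n) * support           # random start inside the support
for _ in range(500):
    F = np.fft.fft(x)
    F = meas_mag * np.exp(1j * np.angle(F))     # impose measured modulus
    x = np.real(np.fft.ifft(F))
    x = np.where(support & (x > 0), x, 0.0)     # support + positivity projection
err = np.linalg.norm(np.abs(np.fft.fft(x)) - meas_mag) / np.linalg.norm(meas_mag)
print(round(float(err), 3))
```

    The Fourier-domain error of error reduction is non-increasing, which is why the residual modulus mismatch is a natural convergence monitor.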

  5. Wavelet Monte Carlo dynamics: A new algorithm for simulating the hydrodynamics of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Dyer, Oliver T.; Ball, Robin C.

    2017-03-01

    We develop a new algorithm for the Brownian dynamics of soft matter systems that evolves time by spatially correlated Monte Carlo moves. The algorithm uses vector wavelets as its basic moves and produces hydrodynamics in the low Reynolds number regime propagated according to the Oseen tensor. When small moves are removed, the correlations closely approximate the Rotne-Prager tensor, itself widely used to correct for deficiencies in Oseen. We also include plane wave moves to provide the longest range correlations, which we detail for both infinite and periodic systems. The computational cost of the algorithm scales competitively with the number of particles simulated, N, scaling as N ln N in homogeneous systems and as N in dilute systems. In comparisons to established lattice Boltzmann and Brownian dynamics algorithms, the wavelet method was found to be only a factor of order unity more expensive than the cheaper lattice Boltzmann algorithm in marginally semi-dilute simulations, while it is significantly faster than both algorithms at large N in dilute simulations. We also validate the algorithm by checking that it reproduces the correct dynamics and equilibrium properties of simple single polymer systems, as well as verifying the effect of periodicity on the mobility tensor.

  6. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  7. The AIROPA software package: milestones for testing general relativity in the strong gravity regime with AO

    NASA Astrophysics Data System (ADS)

    Witzel, Gunther; Lu, Jessica R.; Ghez, Andrea M.; Martinez, Gregory D.; Fitzgerald, Michael P.; Britton, Matthew; Sitarski, Breann N.; Do, Tuan; Campbell, Randall D.; Service, Maxwell; Matthews, Keith; Morris, Mark R.; Becklin, E. E.; Wizinowich, Peter L.; Ragland, Sam; Doppmann, Greg; Neyman, Chris; Lyke, James; Kassis, Marc; Rizzi, Luca; Lilley, Scott; Rampy, Rachel

    2016-07-01

    General relativity can be tested in the strong gravity regime by monitoring stars orbiting the supermassive black hole at the Galactic Center with adaptive optics. However, the limiting source of uncertainty is the spatial PSF variability due to atmospheric anisoplanatism and instrumental aberrations. The Galactic Center Group at UCLA has completed a project developing algorithms to predict PSF variability for Keck AO images. We have created a new software package (AIROPA), based on modified versions of StarFinder and Arroyo, that takes atmospheric turbulence profiles, instrumental aberration maps, and images as inputs and delivers improved photometry and astrometry on crowded fields. This software package will be made publicly available soon.

  8. Physics based model for online fault detection in autonomous cryogenic loading system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashani, Ali; Ponizhovskaya, Ekaterina; Luchinsky, Dmitry

    2014-01-29

    We report the progress in the development of the chilldown model for a rapid cryogenic loading system developed at NASA-Kennedy Space Center. The nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The two-phase flow model of the chilldown is approximated as one-dimensional homogeneous fluid flow with no slip condition for the interphase velocity. The model is built using commercial SINDA/FLUINT software. The results of numerical predictions are in good agreement with the experimental time traces. The obtained results pave the way to the application of the SINDA/FLUINT model as a verification tool for the design and algorithm development required for autonomous loading operation.

  9. Direct Simulation of Friction Forces for Heavy Ions Interacting with a Warm Magnetized Electron Distribution

    NASA Astrophysics Data System (ADS)

    Bruhwiler, D. L.; Busby, R.; Fedotov, A. V.; Ben-Zvi, I.; Cary, J. R.; Stoltz, P.; Burov, A.; Litvinenko, V. N.; Messmer, P.; Abell, D.; Nieter, C.

    2005-06-01

    A proposed luminosity upgrade to RHIC includes a novel electron cooling section, which would use ˜55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. High-current bunched electron beams are required for the RHIC cooler, resulting in very high transverse temperatures and relatively low values for the magnetized cooling logarithm. The accuracy of analytical formulae in this regime requires careful examination. Simulations of the friction coefficient, using the VORPAL code, for single gold ions passing once through the interaction region, are compared with theoretical calculations. Charged particles are advanced using a fourth-order Hermite predictor-corrector algorithm. The fields in the beam frame are obtained from direct calculation of Coulomb's law, which is more efficient than multipole-type algorithms for fewer than ˜10^6 particles. Because the interaction time is so short, it is necessary to suppress the diffusive aspect of the ion dynamics through the careful use of positrons in the simulations.
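    The direct Coulomb's-law field evaluation is an O(N²) pairwise sum which, as the abstract notes, outcompetes multipole-type methods at modest particle counts. A minimal vectorized sketch in Gaussian-like normalized units (the particle count and positions are arbitrary, not the VORPAL setup):

```python
import numpy as np

rng = np.random.default_rng(4)

def coulomb_fields(pos, q):
    """Direct O(N^2) evaluation of the electric field at each particle from all
    others: E_i = sum_j q_j (r_i - r_j) / |r_i - r_j|^3 (Gaussian units)."""
    d = pos[:, None, :] - pos[None, :, :]       # displacement vectors r_i - r_j
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)                # exclude self-interaction
    inv_r3 = r2 ** -1.5
    return (q[None, :, None] * d * inv_r3[..., None]).sum(axis=1)

pos = rng.normal(size=(200, 3))
q = np.ones(200)
E = coulomb_fields(pos, q)
# Newton's third law: the total internal force should vanish to round-off
print(np.abs((q[:, None] * E).sum(axis=0)).max())
```

    Because every pair is summed exactly once in each direction, the net internal force cancels to machine precision, which makes a handy correctness check for any direct-summation field solver.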

  10. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

    In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
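    A Newton-Raphson iteration of the kind proposed can be sketched on a two-unknown toy analogue of joint source/attenuation recovery. The forward map below, with one "ballistic-like" and one "scattering-like" datum, is invented purely for illustration and is not the paper's radiative-transfer operator.

```python
import numpy as np

# Toy analogue of joint (source f, attenuation a) recovery (names illustrative)
def forward(f, a):
    return np.array([f * np.exp(-a),              # ballistic-like datum
                     0.5 * f * a * np.exp(-a)])   # scattering-like datum

def jacobian(f, a):
    e = np.exp(-a)
    return np.array([[e,           -f * e],
                     [0.5 * a * e,  0.5 * f * e * (1 - a)]])

y = forward(2.0, 0.3)                 # synthetic "measurements"
x = np.array([1.0, 0.1])              # initial guess for (f, a)
for _ in range(20):                   # Newton-Raphson: solve J dx = residual
    r = forward(*x) - y
    x = x - np.linalg.solve(jacobian(*x), r)
print(np.round(x, 6))
```

    As in the paper, invertibility of the linearized operator (here, a nonsingular Jacobian near the solution) is what guarantees local uniqueness and quadratic convergence of the iteration.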

  11. Optimization of multi-stage dynamic treatment regimes utilizing accumulated data.

    PubMed

    Huang, Xuelin; Choi, Sangbum; Wang, Lu; Thall, Peter F

    2015-11-20

    In medical therapies involving multiple stages, a physician's choice of a subject's treatment at each stage depends on the subject's history of previous treatments and outcomes. The sequence of decisions is known as a dynamic treatment regime or treatment policy. We consider dynamic treatment regimes in settings where each subject's final outcome can be defined as the sum of longitudinally observed values, each corresponding to a stage of the regime. Q-learning, which is a backward induction method, is used to first optimize the last stage treatment then sequentially optimize each previous stage treatment until the first stage treatment is optimized. During this process, model-based expectations of outcomes of late stages are used in the optimization of earlier stages. When the outcome models are misspecified, bias can accumulate from stage to stage and become severe, especially when the number of treatment stages is large. We demonstrate that a modification of standard Q-learning can help reduce the accumulated bias. We provide a computational algorithm, estimators, and closed-form variance formulas. Simulation studies show that the modified Q-learning method has a higher probability of identifying the optimal treatment regime even in settings with misspecified models for outcomes. It is applied to identify optimal treatment regimes in a study for advanced prostate cancer and to estimate and compare the final mean rewards of all the possible discrete two-stage treatment sequences.
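    Standard backward-induction Q-learning with working linear models can be sketched for a synthetic two-stage trial (the data-generating model, the linear Q-functions, and all coefficients below are assumptions for illustration, not the modified estimator of the paper).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic two-stage trial: state s1, treatment a1 in {0,1}, stage reward y1,
# then state s2, treatment a2, stage reward y2; final outcome is y1 + y2.
n = 2000
s1 = rng.normal(size=n)
a1 = rng.integers(0, 2, n)
y1 = s1 * (2 * a1 - 1) + rng.normal(scale=0.5, size=n)   # a1=1 helps when s1 > 0
s2 = 0.5 * s1 + rng.normal(scale=0.5, size=n)
a2 = rng.integers(0, 2, n)
y2 = s2 * (2 * a2 - 1) + rng.normal(scale=0.5, size=n)

def fit_q(X, y):
    # Ordinary least squares for a working linear Q-function
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 2 is fitted first, per backward induction: Q2(s2, a2)
X2 = np.column_stack([np.ones(n), s2, a2, s2 * a2])
b2 = fit_q(X2, y2)
q2 = lambda s, a: b2[0] + b2[1] * s + b2[2] * a + b2[3] * s * a
v2 = np.maximum(q2(s2, 0), q2(s2, 1))     # value under the optimal stage-2 rule

# Stage 1: regress y1 + predicted optimal future value on (s1, a1)
X1 = np.column_stack([np.ones(n), s1, a1, s1 * a1])
b1 = fit_q(X1, y1 + v2)
rule1 = lambda s: (b1[2] + b1[3] * s > 0).astype(int)    # estimated optimal a1
print(rule1(np.array([-1.0, 1.0])))
```

    The plug-in of the stage-2 value `v2` into the stage-1 regression is exactly where model misspecification can propagate bias backward, the failure mode the abstract's modification targets.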

  12. Estimates of phytoplankton class-specific and total primary production in the Mediterranean Sea from satellite ocean color observations

    NASA Astrophysics Data System (ADS)

    Uitz, Julia; Stramski, Dariusz; Gentili, Bernard; D'Ortenzio, Fabrizio; Claustre, Hervé

    2012-06-01

    An approach that combines a recently developed procedure for improved estimation of surface chlorophyll a concentration (Chlsurf) from ocean color and a phytoplankton class-specific bio-optical model was used to examine primary production in the Mediterranean Sea. Specifically, this approach was applied to the 10 year time series of satellite Chlsurf data from the Sea-viewing Wide Field-of-view Sensor. We estimated the primary production associated with three major phytoplankton classes (micro-, nano-, and picophytoplankton), which also yielded new estimates of the total primary production (Ptot). These estimates of Ptot (e.g., 68 g C m⁻² yr⁻¹ for the entire Mediterranean basin) are lower by a factor of ˜2 and show a different seasonal cycle when compared with results from conventional approaches based on a standard ocean color chlorophyll algorithm and a non-class-specific primary production model. Nanophytoplankton are found to be dominant contributors to Ptot (43-50%) throughout the year and entire basin. Micro- and picophytoplankton exhibit variable contributions to Ptot depending on the season and ecological regime. In the most oligotrophic regime, these contributions are relatively stable all year long with picophytoplankton (˜32%) playing a larger role than microphytoplankton (˜22%). In the blooming regime, picophytoplankton dominate over microphytoplankton most of the year, except during the spring bloom when microphytoplankton (27-38%) are considerably more important than picophytoplankton (20-27%).

  13. Determination of the Spectral Index in the Fission Spectrum Energy Regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Amy Sarah

    2016-05-16

    Neutron reaction cross sections play a vital role in tracking the production and destruction of isotopes exposed to neutron fluence. They are central to the process of reconciling the initial and final atom inventories. Measurements of irradiated samples by radiochemical methods, in tandem with an algorithm, are used to evaluate the fluence a sample is exposed to over the course of the irradiation. This algorithm is the Isotope Production Code (IPC) created and used by the radiochemistry data assessment team at Los Alamos National Laboratory (LANL). An integral result is calculated by varying the total neutron fluence seen by a sample. A sample, irradiated in a critical assembly, will be exposed to a unique neutron flux defined by the neutron source and distance of the sample from the source. Neutron cross sections utilized are a function of the hardness of the neutron spectrum at the location of irradiation. A spectral index is used as an indicator of the hardness of the neutron spectrum. Cross-section fit forms applied in IPC are collapsed from a LANL 30-group energy structure. Several decades of research and development have been performed to formalize the current IPC cross section library. The basis of the current fission spectrum neutron reaction cross section library is rooted in critical assembly experiments performed from the 1950s through the early 1970s at LANL. The focus of this report is development of the spectral index used as an indicator of the hardness of the neutron spectrum in the fission spectrum energy regime.

  14. Tokamak Operation with Safety Factor q95 < 2 via Control of MHD Stability

    DOE PAGES

    Piovesan, Paolo; Hanson, Jeremy M.; Martin, Piero; ...

    2014-07-24

    Magnetic feedback control of the resistive-wall mode has enabled DIII-D to access stable operation at safety factor q95 = 1.9 in divertor plasmas for 150 instability growth times. Magnetohydrodynamic stability sets a hard, disruptive limit on the minimum edge safety factor achievable in a tokamak, or on the maximum plasma current at given toroidal magnetic field. In tokamaks with a divertor, the limit occurs at q95 = 2, as confirmed in DIII-D. Since the energy confinement time scales linearly with current, this also bounds the performance of a fusion reactor. DIII-D has overcome this limit, opening a whole new high-current regime not accessible before. This result brings significant possible benefits in terms of fusion performance, but it also extends resistive wall mode physics and its control to conditions never explored before. In present experiments, q95 < 2 operation is eventually halted by voltage limits reached in the feedback power supplies, not by intrinsic physics issues. Improvements to power supplies and to control algorithms have the potential to further extend this regime.

  15. Plasmonic enhanced terahertz time-domain spectroscopy system for identification of common explosives

    NASA Astrophysics Data System (ADS)

    Demiraǧ, Yiǧit; Bütün, Bayram; Özbay, Ekmel

    2017-05-01

    In this study, we present a classification algorithm for terahertz time-domain spectroscopy systems (THz-TDS) that can be trained to identify most commonly used explosives (C4, HMX, RDX, PETN, TNT, composition-B and black powder) and some non-explosive samples (lactose, sucrose, PABA). Our procedure can be used in any THz-TDS system that detects either transmission or reflection spectra at room conditions. After preprocessing the signal in the low THz regime (0.1 - 3 THz), our algorithm takes advantage of a latent-space transformation based on principal component analysis in order to classify explosives with a low false alarm rate.
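    The PCA latent-space step can be sketched on synthetic spectra with a nearest-centroid classifier standing in for the trained model. The class names, peak positions, and noise level below are placeholders, not real THz signatures of the listed materials.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic THz absorption spectra: each class gets one characteristic peak
freqs = np.linspace(0.1, 3.0, 120)
peaks = {"RDX-like": 0.8, "lactose-like": 1.4, "PETN-like": 2.0}   # illustrative
def spectrum(p):
    return np.exp(-(freqs - p) ** 2 / 0.02) + 0.05 * rng.normal(size=freqs.size)

names = list(peaks)
X = np.stack([spectrum(peaks[names[i % 3]]) for i in range(150)])
y = np.array([i % 3 for i in range(150)])

# PCA: project mean-centered spectra onto the leading principal components
mu = X.mean(0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:3].T                        # 3-component latent space

# Nearest-centroid classifier in the latent space (stand-in for the real model)
cents = np.stack([Z[y == c].mean(0) for c in range(3)])
def classify(s):
    z = (s - mu) @ Vt[:3].T
    return names[np.argmin(((cents - z) ** 2).sum(1))]
print(classify(spectrum(1.4)))
```

    Working in the low-dimensional PCA space rather than on the raw 120-point spectra is what keeps the classifier robust to broadband measurement noise.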

  16. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

    In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (-1.96 dB) when compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
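    The quoted 2/π (-1.96 dB) low-SNR penalty, and the further degradation caused by an offset threshold, can be checked from the Fisher information of a single 1-bit measurement (unit-variance Gaussian noise assumed; the ideal unquantized Fisher information is then 1).

```python
from math import erf, exp, pi, sqrt, log10

def fisher_1bit(a, tau):
    """Fisher information for the mean a from sign(a - tau + n), n ~ N(0, 1)."""
    z = a - tau
    p = 0.5 * (1 + erf(z / sqrt(2)))           # P(output = +1)
    phi = exp(-z * z / 2) / sqrt(2 * pi)       # Gaussian pdf at z
    return phi ** 2 / (p * (1 - p))

# Low-SNR operating point with symmetric threshold: the classical 2/pi result
ratio = fisher_1bit(0.0, 0.0) / 1.0
print(round(ratio, 4), round(10 * log10(ratio), 2))   # 0.6366 -1.96

# An offset threshold (here tau = 1 noise standard deviation) degrades it further
print(round(fisher_1bit(0.0, 1.0), 4))
```

    The second print shows the information loss growing once the comparator threshold drifts away from the signal mean, which is the unknown-offset regime the paper analyzes.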

  17. Determination of photophysical parameters of chlorophyll α in photosynthetic organisms using the method of nonlinear laser fluorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gostev, T S; Fadeev, V V

    2011-05-31

    We study the possibility of solving the multiparameter inverse problem of nonlinear laser fluorimetry of molecular systems with a high local concentration of fluorophores (using the example of chlorophyll α molecules in photosynthetic organisms). Algorithms are proposed that allow determination of up to four photophysical parameters of chlorophyll α from the experimental fluorescence saturation curves. The uniqueness and stability of the inverse problem solution obtained using the proposed algorithms were assessed numerically. The laser spectrometer, designed in the course of this work for nonlinear laser fluorimetry in the quasi-stationary and nonstationary excitation regimes, is described. The algorithms proposed in this paper are tested on pure cultures of the microalgae Chlorella pyrenoidosa and Chlamydomonas reinhardtii under different functional conditions. (optical technologies in biophysics and medicine)

  18. Relativistic extension of a charge-conservative finite element solver for time-dependent Maxwell-Vlasov equations

    NASA Astrophysics Data System (ADS)

    Na, D.-Y.; Moon, H.; Omelchenko, Y. A.; Teixeira, F. L.

    2018-01-01

    Accurate modeling of relativistic particle motion is essential for physical predictions in many problems involving vacuum electronic devices, particle accelerators, and relativistic plasmas. A local, explicit, and charge-conserving finite-element time-domain (FETD) particle-in-cell (PIC) algorithm for time-dependent (non-relativistic) Maxwell-Vlasov equations on irregular (unstructured) meshes was recently developed by Moon et al. [Comput. Phys. Commun. 194, 43 (2015); IEEE Trans. Plasma Sci. 44, 1353 (2016)]. Here, we extend this FETD-PIC algorithm to the relativistic regime by implementing and comparing three relativistic particle-pushers: (relativistic) Boris, Vay, and Higuera-Cary. We illustrate the application of the proposed relativistic FETD-PIC algorithm for the analysis of particle cyclotron motion at relativistic speeds, harmonic particle oscillation in the Lorentz-boosted frame, and relativistic Bernstein modes in magnetized charge-neutral (pair) plasmas.
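
    Of the three pushers compared, the (relativistic) Boris rotation is the most widely documented. A minimal single-particle sketch in normalized units (q/m = c = 1), not the authors' FETD-PIC code, is:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[1 - 1])

def boris_push(u, E, B, dt, qm=1.0, c=1.0):
    """One relativistic Boris step for the normalized momentum u = gamma*v.
    Half electric kick, magnetic rotation, half electric kick."""
    h = 0.5 * qm * dt
    um = tuple(ui + h * Ei for ui, Ei in zip(u, E))          # first half kick
    gamma = math.sqrt(1.0 + sum(x * x for x in um) / c**2)
    t = tuple(h * Bi / gamma for Bi in B)                    # rotation vector
    t2 = sum(x * x for x in t)
    up = tuple(a + b for a, b in zip(um, cross(um, t)))
    s = tuple(2.0 * ti / (1.0 + t2) for ti in t)
    upl = tuple(a + b for a, b in zip(um, cross(up, s)))     # rotated momentum
    return tuple(ui + h * Ei for ui, Ei in zip(upl, E))      # second half kick
```

    In a pure magnetic field the Boris step is an exact rotation of u, so |u| (and hence the Lorentz factor) is conserved to round-off, which is the standard sanity check for cyclotron motion at relativistic speeds.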

  19. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms. [for junction diodes simulation

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Osher, Stanley; Jerome, Joseph

    1991-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially nonoscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  20. Dynamic Regimes of El Niño Southern Oscillation and Influenza Pandemic Timing

    PubMed Central

    Oluwole, Olusegun Steven Ayodele

    2017-01-01

    El Niño southern oscillation (ENSO) dynamics has been shown to drive seasonal influenza dynamics. Severe seasonal influenza epidemics and the 2009–2010 pandemic were coincident with chaotic regimes of ENSO dynamics. ENSO dynamics from 1876 to 2016 were characterized to determine if influenza pandemics are coupled to chaotic regimes. Time-varying spectra of the southern oscillation index (SOI) and sea surface temperature (SST) were compared. SOI and SST were decomposed into components using the algorithm of noise-assisted multivariate empirical mode decomposition. The components were Hilbert transformed to generate instantaneous amplitudes and phases. The trajectories and attractors of components were characterized in polar coordinates and state space. Influenza pandemics were mapped to dynamic regimes of the SOI and SST joint recurrence of annual components. The state space geometry of El Niños lagged by influenza pandemics was characterized and compared with that of other El Niños. Timescales of SOI and SST components ranged from sub-annual to multidecadal. The trajectories of SOI and SST components and the joint recurrence of annual components were dissipative toward chaotic attractors. Periodic, quasi-periodic, and chaotic regimes were present in the recurrence of trajectories, but chaos–chaos transitions dominated. Influenza pandemics occurred during chaotic regimes of significantly low transitivity dimension (p < 0.0001). El Niños lagged by influenza pandemics had distinct state space geometry (p < 0.0001). Chaotic dynamics explains the aperiodic timing and the varying duration and strength of El Niños. Coupling of all influenza pandemics of the past 140 years to chaotic regimes of low transitivity indicates that ENSO dynamics drives influenza pandemic dynamics. Forecast models from ENSO dynamics should complement surveillance for novel influenza viruses. PMID:29218303
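
    The Hilbert-transform step that produces instantaneous amplitudes and phases can be sketched with the standard analytic-signal construction: transform to the frequency domain, zero the negative frequencies, double the positive ones, and transform back. A naive O(n²) DFT is used here so the sketch stays self-contained; the function names are illustrative, not the authors' code.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def analytic_signal(x):
    """Analytic signal of a real series: keep DC (and Nyquist for even n),
    double positive frequencies, discard negative ones. Returns the
    instantaneous amplitude and phase at each sample."""
    n = len(x)
    X = dft(x)
    H = [0.0] * n
    H[0] = 1.0
    if n % 2 == 0:
        H[n // 2] = 1.0
        for j in range(1, n // 2):
            H[j] = 2.0
    else:
        for j in range(1, (n + 1) // 2):
            H[j] = 2.0
    z = idft([Xj * Hj for Xj, Hj in zip(X, H)])
    return [abs(zk) for zk in z], [cmath.phase(zk) for zk in z]
```

    For a pure cosine with an integer number of cycles in the window, the recovered instantaneous amplitude is constant, which is the usual correctness check for this construction.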

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tessore, Nicolas; Metcalf, R. Benton; Winther, Hans A.

    A number of alternatives to general relativity exhibit gravitational screening in the non-linear regime of structure formation. We describe a set of algorithms that can produce weak lensing maps of large scale structure in such theories and can be used to generate mock surveys for cosmological analysis. By analysing a few basic statistics we indicate how these alternatives can be distinguished from general relativity with future weak lensing surveys.

  2. Defining pyromes and global syndromes of fire regimes

    PubMed Central

    Archibald, Sally; Lehmann, Caroline E. R.; Gómez-Dans, Jose L.; Bradstock, Ross A.

    2013-01-01

    Fire is a ubiquitous component of the Earth system that is poorly understood. To date, a global-scale understanding of fire is largely limited to the annual extent of burning as detected by satellites. This is problematic because fire is multidimensional, and focus on a single metric belies its complexity and importance within the Earth system. To address this, we identified five key characteristics of fire regimes—size, frequency, intensity, season, and extent—and combined new and existing global datasets to represent each. We assessed how these global fire regime characteristics are related to patterns of climate, vegetation (biomes), and human activity. Cross-correlations demonstrate that only certain combinations of fire characteristics are possible, reflecting fundamental constraints in the types of fire regimes that can exist. A Bayesian clustering algorithm identified five global syndromes of fire regimes, or pyromes. Four pyromes represent distinctions between crown, litter, and grass-fueled fires, and the relationships of these to biomes and climate are not deterministic. Pyromes were partially discriminated on the basis of available moisture and rainfall seasonality. Human impacts also affected pyromes and are globally apparent as the driver of a fifth and unique pyrome that represents human-engineered modifications to fire characteristics. Differing biomes and climates may be represented within the same pyrome, implying that pathways of change in future fire regimes in response to changes in climate and human activity may be difficult to predict. PMID:23559374

  3. Scaling in two-fluid pinch-off

    NASA Astrophysics Data System (ADS)

    Pommer, Chris; Suryo, Ronald; Subramani, Hariprasad; Harris, Michael; Basaran, Osman

    2009-11-01

    Two-fluid pinch-off is encountered when drops or bubbles of one fluid are ejected from a nozzle into another fluid or when a compound jet breaks. While the breakup of a drop in a passive environment and that of a passive bubble in a liquid are well understood, the physics of pinch-off when both the inner and outer fluids are dynamically active is inadequately understood. In this talk, the breakup of a compound jet whose core and shell are both incompressible Newtonian fluids is analyzed computationally by a method-of-lines ALE algorithm which uses finite elements with elliptic mesh generation for spatial discretization and adaptive finite differences for time integration. Pinch-off dynamics are investigated well beyond the limit of experiments set by the wavelength of visible light and that of various algorithms used in the literature. Simulations show that the minimum neck radius r initially scales with time τ before breakup as r ~ τ^α, where α varies over a certain range. However, depending on the values of the governing dimensionless groups, this initial scaling regime may be transitory and, closer to pinch-off, the dynamics may transition to a final asymptotic regime for which r ~ τ^β, where β ≠ α.
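
    Extracting a scaling exponent from simulated neck-radius data reduces to a least-squares slope in log-log coordinates. A minimal sketch (the power-law data in the test are hypothetical, not the simulations described above):

```python
import math

def scaling_exponent(tau, r):
    """Least-squares slope of log(r) versus log(tau), i.e. the exponent
    alpha in r ~ tau**alpha for power-law data."""
    xs = [math.log(t) for t in tau]
    ys = [math.log(v) for v in r]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

    A transition between two regimes, as described in the abstract, would show up as two distinct slopes when the fit is applied to separate decades of τ.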

  4. Measurement of high-energy neutron flux above ground utilizing a spallation based multiplicity technique

    DOE PAGES

    Roecker, Caleb; Bernstein, Adam; Marleau, Peter; ...

    2016-11-14

    Cosmogenic high-energy neutrons are a ubiquitous, difficult to shield, poorly measured background. Above ground the high-energy neutron energy-dependent flux has been measured, with significantly varying results. Below ground, high-energy neutron fluxes are largely unmeasured. Here we present a reconstruction algorithm to unfold the incident neutron energy-dependent flux measured using the Multiplicity and Recoil Spectrometer (MARS), simulated test cases to verify the algorithm, and provide a new measurement of the above ground high-energy neutron energy-dependent flux with a detailed systematic uncertainty analysis. Uncertainty estimates are provided based upon the measurement statistics, the incident angular distribution, the surrounding environment of the Monte Carlo model, and the MARS triggering efficiency. Quantified systematic uncertainty is dominated by the assumed incident neutron angular distribution and surrounding environment of the Monte Carlo model. The energy-dependent neutron flux between 90 MeV and 400 MeV is reported. Between 90 MeV and 250 MeV the MARS results are comparable to previous Bonner sphere measurements. Over the total energy regime measured, the MARS results are located within the span of previous measurements. Lastly, these results demonstrate the feasibility of future below ground measurements with MARS.

  6. On new scaling group of transformation for Prandtl-Eyring fluid model with both heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Rehman, Khalil Ur; Malik, Aneeqa Ashfaq; Malik, M. Y.; Tahir, M.; Zehra, Iffat

    2018-03-01

    A short communication is structured to offer a set of scaling group transformations for Prandtl-Eyring fluid flow generated by a stretching flat porous surface. The flow regime is considered with both heat and mass transfer characteristics. To seek a solution of the flow problem, a set of scaling group transformations is proposed by adopting the Lie approach. These transformations are used to reduce the partial differential equations to ordinary differential equations. The reduced system is solved by a numerical technique known as the shooting method; a self-coded algorithm is executed in this regard. The obtained results are elaborated by means of figures and tables.
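
    The shooting method reduces a boundary value problem to repeated initial value integrations plus a root search on the unknown initial slope. A minimal sketch on a toy linear BVP (y'' = 6x with y(0) = 0 and y(1) = 2, whose exact initial slope is 1), not the Prandtl-Eyring system itself:

```python
def rk4(f, x, y, h):
    """One classical Runge-Kutta step for the first-order system y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k1)))
    k3 = f(x + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k2)))
    k4 = f(x + h, tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def shoot(slope, n=100):
    """Integrate the toy BVP y'' = 6x from x = 0 to 1 with y(0) = 0 and a
    trial slope y'(0); return the terminal value y(1)."""
    f = lambda x, y: (y[1], 6.0 * x)   # state (y, y')
    x, y, h = 0.0, (0.0, slope), 1.0 / n
    for _ in range(n):
        y = rk4(f, x, y, h)
        x += h
    return y[0]

def shooting_method(target=2.0, lo=-10.0, hi=10.0, tol=1e-10):
    """Bisection on the initial slope until y(1) matches the far boundary value."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (shoot(mid) - target) * (shoot(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

    For nonlinear systems such as the one in the abstract, the bisection is typically replaced by a secant or Newton update on the residual, but the structure (guess slope, integrate, correct) is the same.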

  7. Developments in the CCP4 molecular-graphics project.

    PubMed

    Potterton, Liz; McNicholas, Stuart; Krissinel, Eugene; Gruber, Jan; Cowtan, Kevin; Emsley, Paul; Murshudov, Garib N; Cohen, Serge; Perrakis, Anastassis; Noble, Martin

    2004-12-01

    Progress towards structure determination that is both high-throughput and high-value is dependent on the development of integrated and automatic tools for electron-density map interpretation and for the analysis of the resulting atomic models. Advances in map-interpretation algorithms are extending the resolution regime in which fully automatic tools can work reliably, but at present human intervention is required to interpret poor regions of macromolecular electron density, particularly where crystallographic data are only available to modest resolution [for example, I/sigma(I) < 2.0 for minimum resolution 2.5 Å]. In such cases, a set of manual and semi-manual model-building molecular-graphics tools is needed. At the same time, converting the knowledge encapsulated in a molecular structure into understanding is dependent upon visualization tools, which must be able to communicate that understanding to others by means of both static and dynamic representations. CCP4mg is a program designed to meet these needs in a way that is closely integrated with the ongoing development of CCP4 as a program suite suitable for both low- and high-intervention computational structural biology. As well as providing a carefully designed user interface to advanced algorithms of model building and analysis, CCP4mg is intended to present a graphical toolkit to developers of novel algorithms in these fields.

  8. Inverse transport problems in quantitative PAT for molecular imaging

    NASA Astrophysics Data System (ADS)

    Ren, Kui; Zhang, Rongting; Zhong, Yimin

    2015-12-01

    Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstructions. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.

  9. Design of bifunctional metasurface based on independent control of transmission and reflection.

    PubMed

    Zhuang, Yaqiang; Wang, Guangming; Cai, Tong; Zhang, Qingfeng

    2018-02-05

    Multifunctional metasurfaces integrating different functions can significantly save occupied space, yet most bifunctional metasurfaces reported to date control the wave only in either the reflection or the transmission regime. In this paper, we propose a scheme that allows one to independently control the reflection and transmission wavefronts under orthogonal polarizations. For demonstration, we design a bifunctional metasurface that simultaneously realizes diffuse reflection and focusing transmission. The diffuse reflection is realized using a random phase distribution, implemented by randomly arranging two basic coding unit cells with the aid of an ergodic algorithm. Meanwhile, a hyperbolic phase distribution is designed to realize the focusing functionality in the transmission regime. To further show the potential applications, a high-gain lens antenna is designed by assembling the proposed metasurface with a proper feed. Both simulations and measurements have been carried out, and the agreement between the two sets of results demonstrates the validity of the expected performance. The backward scattering is reduced by more than 5 dB within 6.4-10 GHz compared with a metallic plate. Moreover, the lens antenna has a gain of 20 dB (around 13 dB enhancement in comparison with the bare feeding antenna) and an efficiency of 32.5%.

  10. Multi-Scale Correlative Tomography of a Li-Ion Battery Composite Cathode

    PubMed Central

    Moroni, Riko; Börner, Markus; Zielke, Lukas; Schroeder, Melanie; Nowak, Sascha; Winter, Martin; Manke, Ingo; Zengerle, Roland; Thiele, Simon

    2016-01-01

    Focused ion beam/scanning electron microscopy tomography (FIB/SEMt) and synchrotron X-ray tomography (Xt) are used to investigate the same lithium manganese oxide composite cathode at the same specific spot. This correlative approach allows the investigation of three central issues in the tomographic analysis of composite battery electrodes: (i) Validation of state-of-the-art binary active material (AM) segmentation: Although threshold segmentation by standard algorithms leads to very good segmentation results, limited Xt resolution results in an AM underestimation of 6 vol% and severe overestimation of AM connectivity. (ii) Carbon binder domain (CBD) segmentation in Xt data: While threshold segmentation cannot be applied for this purpose, a suitable classification method is introduced. Based on correlative tomography, it allows for reliable ternary segmentation of Xt data into the pore space, CBD, and AM. (iii) Pore space analysis in the micrometer regime: This segmentation technique is applied to an Xt reconstruction with several hundred microns edge length, thus validating the segmentation of pores within the micrometer regime for the first time. The analyzed cathode volume exhibits a bimodal pore size distribution in the ranges between 0–1 μm and 1–12 μm. These ranges can be attributed to different pore formation mechanisms. PMID:27456201

  11. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.

    PubMed

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio

    2015-01-01

    We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a single framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length.
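
    For contrast with the cavity/message-passing approach, the simplest baseline for edge-disjoint routing is greedy: repeatedly find a shortest remaining path by BFS and delete its edges. It is suboptimal in general (an early path can block later ones), which is exactly the gap global methods aim to close. All names here are illustrative, not the paper's code:

```python
from collections import deque

def bfs_path(adj, s, t):
    """Shortest s-t path by breadth-first search over the remaining edges."""
    if s not in adj or t not in adj:
        return None
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def greedy_edge_disjoint(edges, s, t):
    """Greedily accommodate edge-disjoint s-t paths: route the shortest
    remaining path, delete its edges, repeat until s and t disconnect."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    paths = []
    while True:
        p = bfs_path(adj, s, t)
        if p is None:
            return paths
        paths.append(p)
        for u, v in zip(p, p[1:]):
            adj[u].discard(v)
            adj[v].discard(u)
```

    On small graphs this already shows the two regimes mentioned in the abstract: either every requested path is accommodated or the residual graph disconnects first.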

  13. Fundamental Algorithms of the Goddard Battery Model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1985-01-01

    The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict nickel-cadmium (NiCd) battery performance in a Low Earth Orbit (LEO) cycling regime. The model proper is still in development, but the inherent, fundamental algorithms (or methodologies) of the model are defined. At present, the model is closely dependent on empirical data, and the database currently used is of questionable accuracy. Even so, very good correlations have been found between model predictions and actual cycling data. A more accurate and encompassing database has been generated to serve dual functions: to show the limitations of the current database, and to be embedded in the model proper for more accurate predictions. The fundamental algorithms of the model and the present database and its limitations are described, and a brief preliminary analysis of the new database and its verification of the model's methodology are presented.

  14. From the physics of interacting polymers to optimizing routes on the London Underground

    PubMed Central

    Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael

    2013-01-01

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198

  16. Examining the NZESM Cloud representation with Self Organizing Maps

    NASA Astrophysics Data System (ADS)

    Schuddeboom, Alex; McDonald, Adrian; Parsons, Simon; Morgenstern, Olaf; Harvey, Mike

    2017-04-01

    Several different cloud regimes are identified from MODIS satellite data and the representation of these regimes within the New Zealand Earth System Model (NZESM) is examined. For the development of our cloud classification we utilize a neural network algorithm known as self-organizing maps (SOMs) on MODIS cloud-top pressure/cloud optical thickness joint histograms. To evaluate the representation of cloud within the NZESM, the frequency and geographical distribution of the regimes are compared between the NZESM and satellite data. This approach has the advantage of not only identifying differences, but also potentially giving additional information about the discrepancy, such as in which regions or phases of cloud the differences are most prominent. To allow for a more direct comparison between datasets, the COSP satellite simulation software is applied to NZESM output. COSP works by simulating the observational processes linked to a satellite, within the GCM, so that data can be generated in a way that shares the particular observational biases of specific satellites. By taking the COSP joint histograms and comparing them to our existing classifications we can easily search for discrepancies between the observational data and the simulations without having to be cautious of biases introduced by the satellite. Preliminary results, based on data for 2008, show a significant decrease in overall cloud fraction in the NZESM compared to the MODIS satellite data. To better understand the nature of this discrepancy, the cloud fractions related to different cloud heights and phases were also analysed.
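
    A SOM in its simplest form works as follows: each input is matched to its best-matching unit (BMU), and that unit plus its neighbours on the map are pulled toward the input with a decaying learning rate and a shrinking neighbourhood. A toy 1-D map on 2-D points (a generic sketch, not the MODIS joint-histogram pipeline):

```python
import math
import random

def train_som(data, n_units=4, epochs=40, seed=0):
    """Train a 1-D self-organizing map on 2-D points.
    Returns the trained unit weight vectors."""
    rng = random.Random(seed)
    w = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                  # decaying learning rate
        sigma = max(1.0 * (1 - epoch / epochs), 0.3)     # shrinking neighbourhood
        for x in data:
            # best-matching unit: nearest weight vector in input space
            bmu = min(range(n_units),
                      key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
            for i in range(n_units):
                # neighbourhood kernel on the 1-D map topology
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                w[i][0] += lr * h * (x[0] - w[i][0])
                w[i][1] += lr * h * (x[1] - w[i][1])
    return w

def bmu_index(w, x):
    """Map an input to its best-matching unit index (its 'regime')."""
    return min(range(len(w)),
               key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
```

    In the cloud-regime application, each input would instead be a flattened joint histogram and each trained unit a prototype regime; assigning observed or simulated histograms to BMUs is what allows the frequency comparison described above.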

  17. Concentration Measurements in Self-Excited, Momentum-Dominated Helium Jets

    NASA Technical Reports Server (NTRS)

    Yildirim, Bekir Sedat

    2004-01-01

    Flow structure of momentum-dominated pure helium jets discharged vertically into ambient air was investigated using a high-speed rainbow schlieren deflectometry (RSD) technique. Effects of the operating parameters, i.e., Reynolds number (Re) and Richardson number (Ri), on the oscillatory behavior of the flow were examined over a range of experimental conditions. To isolate the individual effect of these parameters, one of them was fixed and the other was varied with certain constraints. Measurements revealed highly periodic oscillations in the laminar region as well as high regularity in the transition and turbulent regions. Maximum spectral power profiles at different axial locations indicated that the oscillation amplitude increases until the breakdown of the jet in the turbulent regime. The transition from laminar to turbulent flow was also investigated. Fast Fourier transform analysis performed in the transition regime showed that the flow oscillates at a unique frequency, the same as in the upstream laminar flow region. Measured deflection angle data were used in an Abel inversion algorithm to construct the helium concentration fields. Instantaneous helium concentration contours revealed changes in the flow structure and the evolution of vortical structures during an oscillation cycle. Temporal evolution plots of helium concentration at different axial locations showed repeatable oscillations at all axial and radial locations up to the turbulent regime. A cross-correlation technique, applied to find the spatial displacements of the vortical structures, provided correlation coefficient peaks between consecutive schlieren images. Results show that the vortical structures convected and accelerated only in the axial direction.

  18. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
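
    A structural unidentifiability shows up locally as a rank-deficient Fisher information matrix. A toy illustration with a model whose output depends only on the product of its two parameters (hypothetical and far simpler than the systems biology model above; the FIM here assumes unit-variance measurement noise):

```python
def model(params, ts):
    """Toy model y(t) = a*b*t: a and b enter only through their product,
    so they are structurally unidentifiable individually."""
    a, b = params
    return [a * b * t for t in ts]

def fisher_matrix(params, ts, eps=1e-6):
    """Fisher information F = J^T J from forward-difference sensitivities J,
    assuming unit-variance additive noise on each observation."""
    base = model(params, ts)
    J = []
    for k in range(len(params)):
        p = list(params)
        p[k] += eps
        pert = model(p, ts)
        J.append([(y1 - y0) / eps for y1, y0 in zip(pert, base)])
    n = len(params)
    return [[sum(J[i][m] * J[j][m] for m in range(len(ts)))
             for j in range(n)] for i in range(n)]
```

    Here each sensitivity column is proportional to the same time profile, so the determinant of the 2x2 FIM vanishes: the confidence region is unbounded along the direction that preserves the product a*b, which is precisely the degenerate geometry the multiscale-sloppiness analysis generalizes beyond the infinitesimal-noise regime.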

  19. Gas-kinetic unified algorithm for hypersonic flows covering various flow regimes solving Boltzmann model equation in nonequilibrium effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhihui; Ma, Qiang; Wu, Junlin

    2014-12-09

Based on the Gas-Kinetic Unified Algorithm (GKUA) directly solving the Boltzmann model equation, the effect of rotational non-equilibrium is investigated by means of the kinetic Rykov model, which includes the relaxation of rotational degrees of freedom. The spin of a diatomic molecule is described by its moment of inertia, and conservation of total angular momentum is taken as a new Boltzmann collision invariant. The molecular velocity distribution function is integrated with a weight factor on the internal energy, and a closed system of two kinetic governing equations with inelastic and elastic collisions is obtained. An optimized selection technique for discrete velocity ordinate points and numerical quadrature rules for macroscopic flow variables with dynamic updating are developed to simulate hypersonic flows, and a gas-kinetic numerical scheme is constructed to capture the time evolution of the discretized velocity distribution functions. Gas-kinetic boundary conditions in thermodynamic non-equilibrium and the corresponding numerical procedures are studied and implemented by acting directly on the velocity distribution function, and the unified algorithm for the Boltzmann model equation involving non-equilibrium effects is then presented for the whole range of flow regimes. Hypersonic flows involving non-equilibrium effects are simulated numerically, including the inner flows of shock wave structures in nitrogen at Mach numbers 1.5 ≤ Ma ≤ 25, the planar ramp flow over the whole range of Knudsen numbers 0.0009 ≤ Kn ≤ 10, and three-dimensional re-entry flows around a double-cone body.

  20. Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2015-09-01

We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(mi/me)·c/veT) larger than the explicit CFL limit. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. This work was supported by the LANL LDRD program and the DOE-SC ASCR office.

  1. Towards the Consideration of Surface and Environment variables for a Microwave Precipitation Algorithm Over Land

    NASA Astrophysics Data System (ADS)

    Wang, N. Y.; You, Y.; Ferraro, R. R.; Guch, I.

    2014-12-01

Microwave satellite remote sensing of precipitation over land is a challenging problem due to the highly variable land surface emissivity, which, if not properly accounted for, can be much greater than the precipitation signal itself, especially in light rain/snow conditions. Additionally, surfaces such as arid land, deserts and snow cover have brightness temperature characteristics similar to precipitation. Ongoing work by NASA's GPM microwave radiometer team is constructing databases for the GPROF algorithm through a variety of means; however, there is much uncertainty as to the optimal information needed for the wide array of sensors in the GPM constellation, including examination of regional conditions. The at-launch database focuses on stratification by emissivity class, surface temperature and total precipitable water (TPW). We will perform sensitivity studies to determine the potential role of environmental factors such as land surface temperature, surface elevation, and relative humidity, and of storm morphology such as storm vertical structure, height, and ice thickness, in improving precipitation estimation over land, including rain and snow. In other words, what information outside of the satellite radiances can help describe the background and the subsequent departures from it that are active precipitating regions? It is likely that this information will be a function of the various precipitation regimes. Statistical methods such as Principal Component Analysis (PCA) will be utilized in this task. Databases from a variety of sources are being constructed. They include existing satellite microwave measurements of precipitating and non-precipitating conditions, ground radar precipitation rate estimates, surface emissivity climatology from satellites, and surface temperature and TPW from NWP reanalysis. Results from the analysis of these databases with respect to the microwave precipitation sensitivity to the variety of environmental conditions in different climate regimes will be discussed.

  2. Parameter optimization for surface flux transport models

    NASA Astrophysics Data System (ADS)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
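For flavour, here is a minimal real-coded genetic algorithm of the general kind used for such calibrations, run on a stand-in quadratic misfit rather than the actual butterfly-diagram error; the parameter names, bounds and target values are purely illustrative.

```python
import random

def genetic_minimize(loss, bounds, pop_size=40, generations=60,
                     mutation=0.1, seed=1):
    """Tiny real-coded GA: truncation selection, blend crossover,
    clipped Gaussian mutation. `bounds` is a list of (lo, hi) per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=loss)[: pop_size // 5]   # keep the best 20%
        children = list(elite)                           # elitism
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)                  # parents from the elite
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            for k, (lo, hi) in enumerate(bounds):        # Gaussian mutation
                child[k] += rng.gauss(0, mutation * (hi - lo))
                child[k] = min(hi, max(lo, child[k]))
            children.append(child)
        pop = children
    return min(pop, key=loss)

# Hypothetical fit: recover (flow speed, diffusivity) = (12.0, 450.0)
# from a quadratic misfit standing in for the butterfly-diagram error.
target = (12.0, 450.0)
loss = lambda p: (p[0] - target[0]) ** 2 + ((p[1] - target[1]) / 50) ** 2
best = genetic_minimize(loss, [(0.0, 30.0), (100.0, 800.0)])
```

Elitism makes the best misfit monotone non-increasing over generations, which is why even this crude sketch converges.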

  3. Development of antibiotic regimens using graph based evolutionary algorithms.

    PubMed

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimens. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
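The diversity-control mechanism, mating restricted to graph neighbours so that good solutions take over slowly, can be sketched with a toy scalar trait in place of an antibiotic schedule (the ring topology, fitness function and mutation scale below are illustrative assumptions):

```python
import random

def graph_ea_step(population, neighbors, fitness, rng):
    """One generation of a graph-based EA: each vertex competes only with
    a random neighbor, and the loser is replaced by a mutated copy of the
    winner. Restricting information flow to graph edges slows takeover
    and preserves diversity (the point of graph-based EAs)."""
    for v in range(len(population)):
        u = rng.choice(neighbors[v])
        winner, loser = ((v, u) if fitness(population[v]) >= fitness(population[u])
                         else (u, v))
        population[loser] = population[winner] + rng.gauss(0, 0.05)  # mutate

# Toy setup: individuals are real numbers on a ring; fitness peaks at 1.0.
rng = random.Random(0)
n = 32
ring = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}   # cycle graph
pop = [rng.uniform(-2, 2) for _ in range(n)]
fit = lambda x: -(x - 1.0) ** 2
for _ in range(200):
    graph_ea_step(pop, ring, fit, rng)
best = max(pop, key=fit)
```

Denser graphs (e.g. complete) make takeover faster; sparser ones like this ring keep niches alive longer, which is what lets multiple Pareto-style solutions coexist.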

  4. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    NASA Astrophysics Data System (ADS)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  5. A Computational Framework for Analyzing Stochasticity in Gene Expression

    PubMed Central

    Sherman, Marc S.; Cohen, Barak A.

    2014-01-01

Stochastic fluctuations in gene expression give rise to distributions of protein levels across cell populations. Despite a mounting number of theoretical models explaining stochasticity in protein expression, we lack a robust, efficient, assumption-free approach for inferring the molecular mechanisms that underlie the shape of protein distributions. Here we propose a method for inferring sets of biochemical rate constants that govern chromatin modification, transcription, translation, and RNA and protein degradation from stochasticity in protein expression. We asked whether the rates of these underlying processes can be estimated accurately from protein expression distributions, in the absence of any limiting assumptions. To do this, we (1) derived analytical solutions for the first four moments of the protein distribution, (2) found that these four moments completely capture the shape of protein distributions, and (3) developed an efficient algorithm for inferring gene expression rate constants from the moments of protein distributions. Using this algorithm we find that most protein distributions are consistent with a large number of different biochemical rate constant sets. Despite this degeneracy, the solution space of rate constants almost always informs on underlying mechanism. For example, we distinguish between regimes where transcriptional bursting occurs from regimes reflecting constitutive transcript production. Our method agrees with the current standard approach, and in the restrictive regime where the standard method operates, also identifies rate constants not previously obtainable. Even without making any assumptions we obtain estimates of individual biochemical rate constants, or meaningful ratios of rate constants, in 91% of tested cases. In some cases our method identified all of the underlying rate constants. The framework developed here will be a powerful tool for deducing the contributions of particular molecular mechanisms to specific patterns of gene expression. PMID:24811315
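In a much simpler setting than the four-moment machinery described above, moment matching can already recover burst parameters: for a negative-binomial-like bursty protein distribution, the first two moments invert in closed form. The rate values below are invented for illustration.

```python
import math
import random
import statistics

def burst_params_from_moments(samples):
    """Moment-matching sketch: for a negative-binomial protein
    distribution (the standard bursty-expression limit), mean = r*b and
    variance = r*b*(1 + b), so the first two moments invert to a burst
    size b = Fano - 1 and a burst count r = mean / b."""
    m = statistics.fmean(samples)
    v = statistics.pvariance(samples)
    b = v / m - 1.0          # burst size from the Fano factor
    r = m / b                # number of bursts
    return r, b

# Synthetic data: proteins produced in r = 20 geometric bursts of mean
# size b = 5 (hypothetical rate constants, purely for illustration).
rng = random.Random(42)
r_true, b_true = 20, 5.0
p = 1.0 / (1.0 + b_true)     # geometric success probability giving mean b

def sample_protein():
    u = lambda: 1.0 - rng.random()   # uniform in (0, 1]
    return sum(int(math.log(u()) / math.log(1.0 - p)) for _ in range(r_true))

data = [sample_protein() for _ in range(20000)]
r_hat, b_hat = burst_params_from_moments(data)
```

Higher moments (skewness, kurtosis), as used in the paper, are what break the degeneracy when two-moment matching is ambiguous.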

  6. Clustering and assembly dynamics of a one-dimensional microphase former.

    PubMed

    Hu, Yi; Charbonneau, Patrick

    2018-05-23

    Both ordered and disordered microphases ubiquitously form in suspensions of particles that interact through competing short-range attraction and long-range repulsion (SALR). While ordered microphases are more appealing materials targets, understanding the rich structural and dynamical properties of their disordered counterparts is essential to controlling their mesoscale assembly. Here, we study the disordered regime of a one-dimensional (1D) SALR model, whose simplicity enables detailed analysis by transfer matrices and Monte Carlo simulations. We first characterize the signature of the clustering process on macroscopic observables, and then assess the equilibration dynamics of various simulation algorithms. We notably find that cluster moves markedly accelerate the mixing time, but that event chains are of limited help in the clustering regime. These insights will inspire further study of three-dimensional microphase formers.
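For a lattice caricature of a 1D SALR system (nearest-neighbour attraction, next-nearest-neighbour repulsion, not the paper's continuum potential), the transfer-matrix route mentioned above reduces the partition function to the trace of a matrix power, which can be checked against brute-force enumeration:

```python
import itertools
import numpy as np

def transfer_matrix_Z(n, beta, J1, J2, mu):
    """Partition function of a periodic 1D lattice gas (occupancies 0/1)
    with E = sum_i [J1 n_i n_{i+1} + J2 n_i n_{i+2} - mu n_i], via a
    transfer matrix on pairs of neighbouring sites."""
    states = list(itertools.product((0, 1), repeat=2))
    T = np.zeros((4, 4))
    for i, (a, b) in enumerate(states):
        for j, (b2, c) in enumerate(states):
            if b == b2:   # pairs must overlap: (a,b) -> (b,c)
                T[i, j] = np.exp(-beta * (J1 * b * c + J2 * a * c - mu * c))
    return np.trace(np.linalg.matrix_power(T, n))

def brute_force_Z(n, beta, J1, J2, mu):
    """Exact enumeration over all 2^n configurations (small n only)."""
    Z = 0.0
    for s in itertools.product((0, 1), repeat=n):
        E = sum(J1 * s[i] * s[(i + 1) % n]
                + J2 * s[i] * s[(i + 2) % n]
                - mu * s[i] for i in range(n))
        Z += np.exp(-beta * E)
    return Z

# SALR-flavoured parameters: J1 < 0 attracts neighbours, J2 > 0 repels
# next-nearest neighbours (illustrative values).
Z_tm = transfer_matrix_Z(8, 1.0, -1.0, 0.5, 0.2)
Z_bf = brute_force_Z(8, 1.0, -1.0, 0.5, 0.2)
```

The trace over T^n counts each bond and site exactly once around the ring, which is why the two routes agree to machine precision.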

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability, (i) the RGG is connected, or (ii) a giant component exists in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
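The push model is a few lines to simulate; a cycle graph makes a convenient sanity check, since its diameter (n/2) lower-bounds the broadcast time, consistent with the Θ(diam(G)) result quoted above (the graph choice and sizes are illustrative, not from the paper):

```python
import random

def push_broadcast(adj, source, rng):
    """Push protocol: every informed node picks one neighbour uniformly
    at random and informs it. Returns the number of rounds until all
    nodes are informed (assumes the graph is connected)."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        newly = {rng.choice(adj[v]) for v in informed}
        informed |= newly
        rounds += 1
    return rounds

# Toy check on a cycle: at most two new nodes can be informed per round,
# so the broadcast time is at least the diameter n/2.
rng = random.Random(7)
n = 64
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
t = push_broadcast(cycle, 0, rng)
```

On expander-like graphs the same routine finishes in O(log n) rounds; the cycle is the slow extreme where the diameter term dominates.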

  8. Towards a minimal stochastic model for a large class of diffusion-reactions on biological membranes.

    PubMed

    Chevalier, Michael W; El-Samad, Hana

    2012-08-28

    Diffusion of biological molecules on 2D biological membranes can play an important role in the behavior of stochastic biochemical reaction systems. Yet, we still lack a fundamental understanding of circumstances where explicit accounting of the diffusion and spatial coordinates of molecules is necessary. In this work, we illustrate how time-dependent, non-exponential reaction probabilities naturally arise when explicitly accounting for the diffusion of molecules. We use the analytical expression of these probabilities to derive a novel algorithm which, while ignoring the exact position of the molecules, can still accurately capture diffusion effects. We investigate the regions of validity of the algorithm and show that for most parameter regimes, it constitutes an accurate framework for studying these systems. We also document scenarios where large spatial fluctuation effects mandate explicit consideration of all the molecules and their positions. Taken together, our results derive a fundamental understanding of the role of diffusion and spatial fluctuations in these systems. Simultaneously, they provide a general computational methodology for analyzing a broad class of biological networks whose behavior is influenced by diffusion on membranes.
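A generic way to draw such non-exponential waiting times, standing in for the authors' analytical reaction probabilities, is thinning against a constant majorising rate; the decaying propensity below is an invented example:

```python
import math
import random

def sample_reaction_time(rate, rate_max, rng, t_max=50.0):
    """Thinning (rejection) sampler for a time-dependent propensity with
    rate(t) <= rate_max: propose exponential jumps at the majorising
    rate and accept with probability rate(t)/rate_max. Returns math.inf
    if no reaction fires before t_max."""
    t = 0.0
    while True:
        t += rng.expovariate(rate_max)
        if t >= t_max:
            return math.inf
        if rng.random() < rate(t) / rate_max:
            return t

# Illustrative propensity that decays as reactants diffuse apart; the
# survival probability is exp(-integral of rate) ~ exp(-4) ~ 1.8%.
rng = random.Random(5)
rate = lambda t: 2.0 * math.exp(-0.5 * t)
times = [sample_reaction_time(rate, 2.0, rng) for _ in range(4000)]
frac_never = sum(t == math.inf for t in times) / len(times)
```

Thinning is exact for any bounded propensity, so it can consume the time-dependent reaction probabilities described above without tracking molecule positions.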

  9. Dynamics in the Fitness-Income plane: Brazilian states vs World countries

    PubMed Central

    Operti, Felipe G.; Pugliese, Emanuele; Andrade, José S.; Pietronero, Luciano

    2018-01-01

    In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export basket. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview about Brazil as a whole and the comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDPp), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the ranking obtained through two other techniques, namely Endogenous Fitness and Economic Complexity Index. PMID:29874265
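The underlying Fitness-Complexity iteration of Tacchella et al. (2012) is compact enough to sketch; the nested 4x4 export matrix below is a made-up example in which entity 0 is the most diversified and product 3 the most exclusive:

```python
import numpy as np

def fitness_complexity(M, iterations=50):
    """Fitness-Complexity iteration on a binary entity-product matrix M
    (rows: entities/states, cols: products). Each step:
        F_c = sum_p M_cp Q_p ;  Q_p = 1 / sum_c M_cp / F_c ,
    with both vectors renormalised to unit mean."""
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(iterations):
        F_new = M @ Q                       # fit entities export many/complex products
        Q_new = 1.0 / (M.T @ (1.0 / F))     # complex products avoid unfit exporters
        F = F_new / F_new.mean()
        Q = Q_new / Q_new.mean()
    return F, Q

# Hypothetical nested matrix: entity 0 exports everything (most fit),
# entity 3 only the ubiquitous product 0 (least fit).
M = np.array([[1, 1, 1, 1],
              [1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
```

The harmonic mean in the Q update is what penalises products that even low-fitness entities can export, giving the nonlinear ranking that distinguishes Fitness from simpler diversity counts.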

  11. A least-squares parameter estimation algorithm for switched hammerstein systems with applications to the VOR

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

A "multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result, followed by a smooth evolution under the new regime. The switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems suspected of containing "hard" nonlinearities.

  12. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier Science B.V. All rights reserved.

  13. Statistics of vacuum breakdown in the high-gradient and low-rate regime

    NASA Astrophysics Data System (ADS)

    Wuensch, Walter; Degiovanni, Alberto; Calatroni, Sergio; Korsbäck, Anders; Djurabekova, Flyura; Rajamäki, Robin; Giner-Navarro, Jorge

    2017-01-01

In an increasing number of high-gradient linear accelerator applications, accelerating structures must operate with both high surface electric fields and low breakdown rates. Understanding the statistical properties of breakdown occurrence in such a regime is of practical importance for optimizing accelerator conditioning and operation algorithms, as well as of interest for efforts to understand the physical processes which underlie the breakdown phenomenon. Experimental data on breakdown have been collected in two distinct high-gradient experimental set-ups: a prototype linear accelerating structure operated in the Compact Linear Collider Xbox 12 GHz test stands, and a parallel plate electrode system operated with pulsed DC in the kV range. The collected data are presented, analyzed and compared. The two systems show similar, distinctive, two-part distributions of the number of pulses between breakdowns, with each part corresponding to a specific, constant event rate. The correlation between distance and number of pulses between breakdowns indicates that the two parts of the distribution, and their corresponding event rates, represent independent primary and induced follow-up breakdowns. The similarity of results from pulsed DC to 12 GHz rf indicates a similar vacuum arc triggering mechanism over the range of conditions covered by the experiments.
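The two-part distributions of pulses between breakdowns suggest a mixture of two exponential rates (fast follow-ups plus slow primaries), which a plain EM fit can separate; the means, mixture weight and sample size below are invented, not the experimental values:

```python
import math
import random

def em_two_exponentials(x, iters=200):
    """EM fit of a two-component exponential mixture: a minimal stand-in
    for the two-part pulses-between-breakdowns distributions described
    above (a fast follow-up scale plus a slow primary scale)."""
    m = sum(x) / len(x)
    mu = [m / 4.0, m * 2.0]          # crude initial scales
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        resp = []
        for xi in x:
            p = [w[k] / mu[k] * math.exp(-xi / mu[k]) for k in range(2)]
            resp.append(p[0] / (p[0] + p[1]))
        # M-step: weighted means and mixture weights
        n1 = sum(resp)
        mu[0] = sum(r * xi for r, xi in zip(resp, x)) / n1
        mu[1] = sum((1 - r) * xi for r, xi in zip(resp, x)) / (len(x) - n1)
        w = [n1 / len(x), 1 - n1 / len(x)]
    return sorted(mu)

# Synthetic data: 30% fast follow-ups (mean 5 pulses) and 70% primaries
# (mean 200 pulses), illustrative numbers only.
rng = random.Random(3)
data = [rng.expovariate(1 / 5.0) if rng.random() < 0.3
        else rng.expovariate(1 / 200.0) for _ in range(5000)]
mu_fast, mu_slow = em_two_exponentials(data)
```

When the two scales are well separated, as in the experiments, EM recovers both rates reliably from the pooled counts alone.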

  14. A Linear Dynamical Systems Approach to Streamflow Reconstruction Reveals History of Regime Shifts in Northern Thailand

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung T. T.; Galelli, Stefano

    2018-03-01

    Catchment dynamics is not often modeled in streamflow reconstruction studies; yet, the streamflow generation process depends on both catchment state and climatic inputs. To explicitly account for this interaction, we contribute a linear dynamic model, in which streamflow is a function of both catchment state (i.e., wet/dry) and paleoclimatic proxies. The model is learned using a novel variant of the Expectation-Maximization algorithm, and it is used with a paleo drought record—the Monsoon Asia Drought Atlas—to reconstruct 406 years of streamflow for the Ping River (northern Thailand). Results for the instrumental period show that the dynamic model has higher accuracy than conventional linear regression; all performance scores improve by 45-497%. Furthermore, the reconstructed trajectory of the state variable provides valuable insights about the catchment history—e.g., regime-like behavior—thereby complementing the information contained in the reconstructed streamflow time series. The proposed technique can replace linear regression, since it only requires information on streamflow and climatic proxies (e.g., tree-rings, drought indices); furthermore, it is capable of readily generating stochastic streamflow replicates. With a marginal increase in computational requirements, the dynamic model brings more desirable features and value to streamflow reconstructions.
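The filtering core of such a linear dynamical model can be sketched for a scalar state; the authors' model is multivariate and trained by EM, so the coefficients and noise levels below are purely illustrative:

```python
import math
import random

def kalman_filter_1d(y, a, c, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t,
    with w ~ N(0, q) and v ~ N(0, r). Returns the filtered state means."""
    x, p, out = x0, p0, []
    for obs in y:
        x, p = a * x, a * a * p + q                         # predict
        k = p * c / (c * c * p + r)                         # Kalman gain
        x, p = x + k * (obs - c * x), (1.0 - k * c) * p     # update
        out.append(x)
    return out

# Synthetic check: filtering should track the latent state better than
# the raw observations do.
rng = random.Random(1)
a, c, q, r = 0.9, 1.0, 0.1, 0.5
x, truth, obs = 0.0, [], []
for _ in range(500):
    x = a * x + rng.gauss(0.0, math.sqrt(q))
    truth.append(x)
    obs.append(c * x + rng.gauss(0.0, math.sqrt(r)))
est = kalman_filter_1d(obs, a, c, q, r)
mse_filter = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
mse_obs = sum((o - t) ** 2 for o, t in zip(obs, truth)) / len(truth)
```

EM training wraps exactly this pass (plus a smoothing pass) in an outer loop that re-estimates a, c, q and r from the inferred states.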

  15. Numerical heating in Particle-In-Cell simulations with Monte Carlo binary collisions

    NASA Astrophysics Data System (ADS)

    Alves, E. Paulo; Mori, Warren; Fiuza, Frederico

    2017-10-01

    The binary Monte Carlo collision (BMCC) algorithm is a robust and popular method to include Coulomb collision effects in Particle-in-Cell (PIC) simulations of plasmas. While a number of works have focused on extending the validity of the model to different physical regimes of temperature and density, little attention has been given to the fundamental coupling between PIC and BMCC algorithms. Here, we show that the coupling between PIC and BMCC algorithms can give rise to (nonphysical) numerical heating of the system, that can be far greater than that observed when these algorithms operate independently. This deleterious numerical heating effect can significantly impact the evolution of the simulated system particularly for long simulation times. In this work, we describe the source of this numerical heating, and derive scaling laws for the numerical heating rates based on the numerical parameters of PIC-BMCC simulations. We compare our theoretical scalings with PIC-BMCC numerical experiments, and discuss strategies to minimize this parasitic effect. This work is supported by DOE FES under FWP 100237 and 100182.

  16. Development of Finer Spatial Resolution Optical Properties from MODIS

    DTIC Science & Technology

    2008-02-04

infrared (SWIR) channels at 1240 nm and 2130 nm. The increased resolution spectral Rrs channels are input into bio-optical algorithms (Quasi...processes. Additionally, increased resolution is required for validation of ocean color products in coastal regions due to the shorter spatial scales of...with in situ Rrs data to determine the "best" method in coastal regimes. We demonstrate that finer resolution is required for validation of coastal

  17. Experimental demonstration of a format-flexible single-carrier coherent receiver using data-aided digital signal processing.

    PubMed

    Elschner, Robert; Frey, Felix; Meuer, Christian; Fischer, Johannes Karl; Alreesh, Saleem; Schmidt-Langhorst, Carsten; Molle, Lutz; Tanimura, Takahito; Schubert, Colja

    2012-12-17

    We experimentally demonstrate the use of data-aided digital signal processing for format-flexible coherent reception of different 28-GBd PDM and 4D modulated signals in WDM transmission experiments over up to 7680 km SSMF by using the same resource-efficient digital signal processing algorithms for the equalization of all formats. Stable and regular performance in the nonlinear transmission regime is confirmed.

  18. On the continuity of mean total normal stress in geometrical multiscale cardiovascular problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanco, Pablo J., E-mail: pjblanco@lncc.br; INCT-MACC, Instituto Nacional de Ciência e Tecnologia em Medicina Assistida por Computação Científica, Petrópolis; Deparis, Simone, E-mail: simone.deparis@epfl.ch

    2013-10-15

In this work an iterative strategy to implicitly couple dimensionally-heterogeneous blood flow models accounting for the continuity of mean total normal stress at interface boundaries is developed. Conservation of mean total normal stress in the coupling of heterogeneous models is mandatory to satisfy energetic consistency between them. Nevertheless, existing methodologies are based on modifications of the Navier–Stokes variational formulation, which are undesired when dealing with fluid–structure interaction or black box codes. The proposed methodology makes it possible to couple one-dimensional and three-dimensional fluid–structure interaction models, enforcing the continuity of mean total normal stress while imposing just flow rate data or even the classical Neumann boundary data on the models. This is accomplished by modifying an existing iterative algorithm, which is also able to account for the continuity of the vessel area, when required. Comparisons are performed to assess differences in the convergence properties of the algorithms when considering the continuity of mean normal stress and the continuity of mean total normal stress for a wide range of flow regimes. Finally, examples in the physiological regime are shown to evaluate the importance, or not, of considering the continuity of mean total normal stress in hemodynamics simulations.

  19. Time and space analysis of turbulence of gravity surface waves

    NASA Astrophysics Data System (ADS)

    Mordant, Nicolas; Aubourg, Quentin; Viboud, Samuel; Sommeria, Joel

    2016-11-01

Wave turbulence is a statistical state made of a very large number of nonlinearly interacting waves. The Weak Turbulence Theory was developed to describe such a situation in the weakly nonlinear regime. Although oceanic data tend to be compatible with the theory, laboratory data fail to fulfill the theoretical predictions. A space-time resolved measurement of the waves has proven to be especially fruitful for identifying the mechanisms at play in turbulence of gravity-capillary waves. We developed an image processing algorithm to measure the motion of the surface of water with both space and time resolution. We first seed the surface with slightly buoyant polystyrene particles and use 3 cameras to reconstruct the surface. Our stereoscopic algorithm is coupled to PIV to obtain both the surface deformation and the velocity of the water surface. Such a coupling is shown to improve the sensitivity of the measurement by one order of magnitude. We use this technique to probe the existence of weakly nonlinear turbulence excited by two small wedge wavemakers in a 13-m diameter wave flume. We observe a truly weakly nonlinear regime of isotropic wave turbulence. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No 647018-WATU).

  20. Parallel Fokker–Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick

    2017-01-01

    A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing the binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state of the art computer cluster technologies.
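    The per-cell switching logic can be sketched as follows (the hard-sphere mean free path formula is standard; the switching threshold and all numbers are illustrative assumptions, not values taken from the paper):

```python
import math

def mean_free_path(n, d):
    """Hard-sphere mean free path [m] for number density n [1/m^3], diameter d [m]."""
    return 1.0 / (math.sqrt(2.0) * math.pi * d * d * n)

def choose_operator(n, d, cell_size, kn_switch=0.05):
    """Pick the collision operator for one cell from its local Knudsen number.

    Kn_local = lambda / cell_size; DSMC where the cell resolves a rarefied gas,
    FP where the gas is near-continuum and binary collisions become too costly.
    The 0.05 threshold is an illustrative assumption.
    """
    kn_local = mean_free_path(n, d) / cell_size
    return "DSMC" if kn_local > kn_switch else "FP"
```

    For atmospheric-density argon (n ≈ 2.5e25 m^-3, d ≈ 3.7e-10 m) in a 0.1 mm cell the local Kn is far below the threshold, so the cheaper FP operator is used; in a strongly rarefied region the same cell logic falls back to DSMC.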

  1. Variation of surface ozone in Campo Grande, Brazil: meteorological effect analysis and prediction.

    PubMed

    Pires, J C M; Souza, A; Pavão, H G; Martins, F G

    2014-09-01

    The effect of meteorological variables on surface ozone (O3) concentrations was analysed based on temporal variation of linear correlation and artificial neural network (ANN) models defined by genetic algorithms (GAs). ANN models were also used to predict the daily average concentration of this air pollutant in Campo Grande, Brazil. Three methodologies were applied using GAs, two of them considering threshold models. In these models, the variables selected to define different regimes were daily average O3 concentration, relative humidity and solar radiation. The threshold model that considers two O3 regimes was the one that correctly described the effect of important meteorological variables on O3 behaviour, while also presenting good predictive performance. Solar radiation, relative humidity and rainfall were significant for both O3 regimes; however, wind speed (dispersion effect) was only significant for high concentrations. According to this model, high O3 concentrations corresponded to high solar radiation, low relative humidity and low wind speed. This model proved to be a powerful tool for interpreting O3 behaviour, and is useful for defining policy strategies for human health protection regarding air pollution.
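    The two-regime threshold idea can be sketched on synthetic data (the coefficients, the threshold, and the simplification that regime labels are known in advance are all ours, not the paper's fitted model): a separate linear model is fitted for each O3 regime, and the wind-speed coefficient should emerge only in the high-O3 regime:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
solar = rng.uniform(0, 30, n)   # solar radiation (illustrative units)
rh = rng.uniform(20, 95, n)     # relative humidity [%]
wind = rng.uniform(0, 8, n)     # wind speed [m/s]

# synthetic "truth": wind (dispersion) only acts in the high-O3 regime
o3 = 20.0 + 1.2 * solar - 0.15 * rh
high = o3 > 35.0                # regime labels, treated as known here
o3 = o3 - 2.0 * wind * high + rng.normal(0.0, 1.0, n)

def fit_regime(mask):
    """Least-squares fit of O3 on [1, solar, rh, wind] within one regime."""
    X = np.column_stack([np.ones(mask.sum()), solar[mask], rh[mask], wind[mask]])
    beta, *_ = np.linalg.lstsq(X, o3[mask], rcond=None)
    return beta

beta_low = fit_regime(~high)    # wind coefficient should be near 0
beta_high = fit_regime(high)    # wind coefficient should be near -2
```

    A single pooled regression would smear the wind effect across both regimes; splitting at the threshold recovers the regime-specific dispersion effect described above.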

  2. Air-sea interaction regimes in the sub-Antarctic Southern Ocean and Antarctic marginal ice zone revealed by icebreaker measurements

    NASA Astrophysics Data System (ADS)

    Yu, Lisan; Jin, Xiangze; Schulz, Eric W.; Josey, Simon A.

    2017-08-01

    This study analyzed shipboard air-sea measurements acquired by the icebreaker Aurora Australis during its off-winter operation from December 2010 to May 2012. Mean conditions over 7 months (October-April) were compiled from a total of 22 ship tracks. The icebreaker traversed the water between Hobart, Tasmania, and the Antarctic continent, providing valuable in situ insight into two dynamically important, yet poorly sampled, regimes: the sub-Antarctic Southern Ocean and the Antarctic marginal ice zone (MIZ) in the Indian Ocean sector. The transition from the open water to the ice-covered surface creates sharp changes in albedo, surface roughness, and air temperature, leading to consequential effects on air-sea variables and fluxes. A major effort was made to estimate the air-sea fluxes in the MIZ using bulk flux algorithms that are tuned specifically for sea-ice effects, while computing the fluxes over the sub-Antarctic section using the COARE3.0 algorithm. The study evidenced strong sea-ice modulation of the winds, with the southerly airflow showing deceleration (convergence) in the MIZ and acceleration (divergence) when moving away from the MIZ. Marked seasonal variations in heat exchanges between the atmosphere and the ice margin were noted. The monotonic increase in turbulent latent and sensible heat fluxes after summer quickly turned the MIZ into a heat loss regime, while at the same time the sub-Antarctic surface water continued to receive heat from the atmosphere. The drastic increase in turbulent heat loss in the MIZ contrasted sharply with the insignificant and seasonally invariant turbulent heat loss over the sub-Antarctic open water.

    Plain Language Summary: The icebreaker Aurora Australis is a research and supply vessel that is regularly chartered by the Australian Antarctic Division during the southern summer to operate in waters between Hobart, Tasmania, and Antarctica.
The vessel serves as the main lifeline to three permanent research stations on the Antarctic continent and provides critical support for Australia's Southern Ocean research operations. Automated meteorological measurement systems are deployed onboard the vessel, providing routine observations of wind, air and sea temperature, humidity, pressure, precipitation, and long- and short-wave radiation. Two climatically important regimes are sampled as the icebreaker sails across the sub-Antarctic Southern Ocean and traverses the marginal region of the East Antarctic continent. One regime is the Antarctic Circumpolar Current (ACC) system, featuring strong westerly winds. The other is the Antarctic seasonal marginal ice zone (MIZ), i.e., the narrow transition zone that connects the ice-free sub-Antarctic with the Antarctic ice-covered regime. Observing the remote Southern Ocean has been historically challenging due to cost realities and logistical difficulties. The shipboard near-surface meteorological measurements offer a rare and valuable opportunity to gain in situ insight into the air-sea heat and momentum exchange in two poorly sampled yet dynamically important regimes.

  3. PEG Enhancement for EM1 and EM2+ Missions

    NASA Technical Reports Server (NTRS)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed.
The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than those seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper will describe the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm.
This paper illustrates the challenges posed by the Block-1B vehicle, and results show that the improved PEG algorithm is suitable for use on the SLS Block-1B vehicle as part of the Guidance, Navigation, and Control System.

  4. Late-time growth rate, mixing, and anisotropy in the multimode narrowband Richtmyer-Meshkov instability: The θ-group collaboration

    NASA Astrophysics Data System (ADS)

    Thornber, B.; Griffond, J.; Poujade, O.; Attal, N.; Varshochi, H.; Bigdelou, P.; Ramaprabhu, P.; Olson, B.; Greenough, J.; Zhou, Y.; Schilling, O.; Garside, K. A.; Williams, R. J. R.; Batha, C. A.; Kuchugov, P. A.; Ladonkina, M. E.; Tishkin, V. F.; Zmitrenko, N. V.; Rozanov, V. B.; Youngs, D. L.

    2017-10-01

    Turbulent Richtmyer-Meshkov instability (RMI) is investigated through a series of high-resolution three-dimensional simulations of two initial conditions with eight independent codes. The simulations are initialised with a narrowband perturbation such that instability growth is due to non-linear coupling/backscatter from the energetic modes, thus generating the lowest expected growth rate from a pure RMI. By independently assessing the results from each algorithm and computing ensemble averages across algorithms, the results allow a quantification of key flow properties as well as of the uncertainty due to differing numerical approaches. A new analytical model predicting the initial layer growth for a multimode narrowband perturbation is presented, along with two models covering the linear and non-linear regimes combined.
Overall, the growth rate exponent is determined as θ = 0.292 ± 0.009, in good agreement with prior studies; however, the exponent decays slowly in time. Also, θ is shown to be relatively insensitive to the choice of mixing layer width measurement. The asymptotic integral molecular mixing measures Θ = 0.792 ± 0.014, Ξ = 0.800 ± 0.014, and Ψ = 0.782 ± 0.013 are lower than some experimental measurements but within the range of prior numerical studies. The flow field is shown to be persistently anisotropic for all algorithms, at the latest time having between 49% and 66% higher kinetic energy in the shock-parallel direction than in the perpendicular directions, and shows no return to isotropy. The plane-averaged volume fraction profiles at different time instants collapse reasonably well when scaled by the integral width, implying that the layer can be described by a single length scale and thus a single θ. Quantitative data given for both ensemble averages and individual algorithms provide useful benchmark results for future research.

  5. Enhancing the quality of thermographic diagnosis in medicine

    NASA Astrophysics Data System (ADS)

    Kuklitskaya, A. G.; Olefir, G. I.

    2005-12-01

    This paper discusses the possibilities of enhancing the quality of thermographic diagnosis in medicine by increasing the objectivity of the processes of recording, visualization, and interpretation of IR images (thermograms) of patients.
A test program is proposed for the diagnosis of oncopathology of the mammary glands, involving standard conditions for recording thermograms, visualization of the IR image in several versions of the color palette and shades of grey, its interpretation in accordance with a rigorously specified algorithm that takes into account the temperature regime in the Zakharin-Head zone of the heart, and the drawing of a conclusion based on a statistical analysis of literature data and the results of a survey of more than 3000 patients of the Minsk City Clinical Oncological Dispensary.

  6. Assessment of MCRM Boost Assist from Orbit for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This report provides the results of an analysis of the beamed-energy-driven MHD Chemical Rocket Motor (MCRM) for application to boost from orbit to escape for deep space and interplanetary missions. Parametric analyses were performed to determine the operating regime for which the MCRM provides significant propulsion performance enhancement. Analysis of the MHD accelerator was performed using numerical computational methods to determine the design and operational features necessary to achieve Isp on the order of 2,000 to 3,000 seconds. Algorithms were developed to scale weights for the accelerator and power supply. Significant improvement in propulsion system performance can be achieved with the beamed-energy-driven MCRM.
The limiting factor on achievable vehicle acceleration is the specific power of the rectenna.

  7. Numerical modelling of multimode fibre-optic communication lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidelnikov, O S; Fedoruk, M P; Sygletos, S

    The results of numerical modelling of nonlinear propagation of an optical signal in multimode fibres with a small differential group delay are presented. It is found that the dependence of the error vector magnitude (EVM) on the differential group delay can be reduced by increasing the number of ADC samples per symbol in the numerical implementation of the differential group delay compensation algorithm in the receiver. The possibility of using multimode fibres with a small differential group delay for data transmission in modern digital communication systems is demonstrated. It is shown that with an increasing number of modes the strong coupling regime provides a lower EVM level than the weak coupling one.
  8. Validation and Error Characterization for the Global Precipitation Measurement

    NASA Technical Reports Server (NTRS)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle, with specific goals of improving the understanding and prediction of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of the Global Precipitation Measurement mission to provide quantitative estimates of the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program.
The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration, assumption, or algorithm. The instrumentation and techniques of the Supersites will be discussed. The GPM core satellite, with its dual-frequency radar and conically scanning radiometer, will provide insight into precipitation drop-size distributions and potentially increased measurement capabilities of light rain and snowfall. 
The ground validation program will include instrumentation and techniques commensurate with these new measurement capabilities.

  9. Archetypal TRMM Radar Profiles Identified Through Cluster Analysis

    NASA Technical Reports Server (NTRS)

    Boccippio, Dennis J.

    2003-01-01

    It is widely held that identifiable 'convective regimes' exist in nature, although precise definitions of these are elusive. Examples include land/ocean distinctions, break/monsoon behavior, seasonal differences in the Amazon (SON vs DJF), etc. These regimes are often described by differences in the realized local convective spectra, and measured by various metrics of convective intensity, depth, areal coverage and rainfall amount. Objective regime identification may be valuable in several ways: regimes may serve as natural 'branch points' in satellite retrieval algorithms or data assimilation efforts; one example might be objective identification of regions that 'should' share a similar Z-R relationship. Similarly, objectively defined regimes may provide guidance on optimal siting of ground validation efforts. Objectively defined regimes could also serve as natural (rather than arbitrary geographic) domain 'controls' in studies of convective response to environmental forcing.
Quantification of convective vertical structure has traditionally involved parametric study of prescribed quantities thought to be important to convective dynamics: maximum radar reflectivity, cloud top height, 30-35 dBZ echo top height, rain rate, etc. Individually, these parameters are somewhat deficient, as their interpretation is often nonunique (the same metric value may signify different physics in different storm realizations). Individual metrics also fail to capture the coherence and interrelationships between vertical levels available in full 3-D radar datasets. An alternative approach is discovery of natural partitions of vertical structure in a globally representative dataset, or 'archetypal' reflectivity profiles. In this study, this is accomplished through cluster analysis of a very large sample (O(10^7)) of TRMM-PR reflectivity columns. Once achieved, the rain-conditional and unconditional 'mix' of archetypal profile types in a given location and/or season provides a description of the local convective spectrum which retains vertical structure information. A further cluster analysis of these 'mixes' can identify recurrent convective spectra.
These are a first step towards objective identification of convective regimes, and towards answering the question: 'What are the most convectively similar locations in the world?'

  10. Satellite remote sensing of aerosol and cloud properties over Eurasia

    NASA Astrophysics Data System (ADS)

    Sogacheva, Larisa; Kolmonen, Pekka; Saponaro, Giulia; Virtanen, Timo; Rodriguez, Edith; Sundström, Anu-Maija; Atlaskina, Ksenia; de Leeuw, Gerrit

    2015-04-01

    Satellite remote sensing provides the spatial distribution of aerosol and cloud properties over a wide area. In our studies, large data sets are used for statistical studies of aerosol-cloud interaction in an area covering Fennoscandia, the Baltic Sea and adjacent regions over the European mainland. This area spans several regimes with different influences on aerosol-cloud interaction, such as the transition from relatively clean air over Fennoscandia to more anthropogenically polluted air further south, and the influence of maritime air over the Baltic and oceanic air advected from the North Atlantic. Anthropogenic pollution occurs in several parts of the study area, in particular near densely populated areas and megacities, but also in industrialized areas and areas with dense traffic. The aerosol in such areas is quite different from that produced over the boreal forest and has different effects on air quality and climate. Studies have been made on the effects of aerosols on air quality and on the radiation balance in China.
The aim is to study the effect of these different regimes on aerosol-cloud interaction using a large aerosol and cloud data set retrieved with the (Advanced) Along Track Scanning Radiometer (A)ATSR Dual View algorithm (ADV), further developed at the Finnish Meteorological Institute, together with aerosol and cloud data provided by MODIS. Retrieval algorithms for aerosol and clouds have been developed for the (A)ATSR, a series of instruments of which we use the second and third: ATSR-2, which flew on the ERS-2 satellite (1995-2003), and AATSR, which flew on the ENVISAT satellite (2002-2012) (both from the European Space Agency, ESA). The ADV algorithm provides aerosol data on a global scale with a default resolution of 10x10 km2 (L2) and an aggregate product on a 1x1 degree grid (L3). Optionally, a 1x1 km2 retrieval product is available over smaller areas for specific studies. Since no prior knowledge of surface properties is needed for the retrieval of AOD, the surface reflectance can be independently retrieved using the AOD for atmospheric correction. For the retrieval of cloud properties, the SACURA algorithm has been implemented in the ADV/ASV aerosol retrieval suite. Cloud properties retrieved from AATSR data are cloud fraction, cloud optical thickness, cloud top height, cloud droplet effective radius, and liquid water path. Aerosol and cloud properties are applied in different studies over the Eurasia area. The simultaneous retrieval of aerosol and cloud properties allows for study of the transition from the aerosol regime to the cloud regime, such as changes from effective radius or AOD (aerosol optical depth) to COT (cloud optical thickness). The column-integrated aerosol extinction, or aerosol optical depth (AOD), which is primarily reported from satellite observations, can be used as a proxy for cloud condensation nuclei (CCN) and hence contains information on the ability of aerosol particles to form clouds.
Connecting this information with direct observations of cloud properties thus provides information on aerosol-cloud interactions.

  11. Particle Scattering in the Resonance Regime: Full-Wave Solution for Axisymmetric Particles with Large Aspect Ratios

    NASA Technical Reports Server (NTRS)

    Zuffada, Cinzia; Crisp, David

    1997-01-01

    Reliable descriptions of the optical properties of clouds and aerosols are essential for studies of radiative transfer in planetary atmospheres. Existing scattering algorithms provide accurate estimates of these properties for spherical particles with a wide range of sizes and refractive indices, but these methods are not valid for non-spherical particles (e.g., ice crystals, mineral dust, and smoke). Even though a host of methods exist for deriving the optical properties of nonspherical particles that are very small or very large compared with the wavelength, only a few methods are valid in the resonance regime, where the particle dimensions are comparable with the wavelength. Most such methods are not ideal for particles with sharp edges or large axial ratios. We explore the utility of an integral equation approach for deriving the single-scattering optical properties of axisymmetric particles with large axial ratios.
The accuracy of this technique is shown for spheres of increasing size parameter and for an ensemble of randomly oriented prolate spheroids of size parameter equal to 10.079368. In this last case our results are compared with published results obtained with the T-matrix approach. Next we derive cross sections, single-scattering albedos, and phase functions for cylinders, disks, and spheroids of ice with dimensions extending from the Rayleigh to the geometric optics regime. Compared with those for a standard surface integral equation method, the storage requirement and the computer time needed by this method are reduced, thus making it attractive for generating databases to be used in multiple-scattering calculations. Our results show that water ice disks and cylinders are more strongly absorbing than equivalent-volume spheres at most infrared wavelengths. The geometry of these particles also affects the angular dependence of the scattering. Disks and columns with maximum linear dimensions larger than the wavelength scatter much more radiation in the forward and backward directions and much less radiation at intermediate phase angles than equivalent-volume spheres.

  12. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL).
The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and the technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) hydrodynamics; (b) transport processes; and (c) dynamic strength of materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary, but not sufficient, step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report.
The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.

  13. Bayesian tomography by interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Romary, T.

    2017-12-01

    In seismic tomography, we seek to determine the velocity of the underground from noisy first arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation.
Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, although they exchange information only between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class makes it possible to design interacting schemes that can take advantage of the whole history of the chains, by allowing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first arrival traveltime tomography.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JPhCS.147a2047L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JPhCS.147a2047L"><span>Online recognition of the multiphase flow regime and study of slug flow in pipeline</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liejin, Guo; Bofeng, Bai; Liang, Zhao; Xin, Wang; Hanyang, Gu</p> <p>2009-02-01</p> <p>Multiphase flow is a phenomenon found widely in nature and daily life, as well as in the petroleum and chemical engineering industries. The interface structures among the phases and their movement are complicated, distributed randomly and heterogeneously across spatial and temporal scales, and can take on multiple flow structures and states [1]. A flow regime is defined by the macroscopic features of the multiphase interface structure and its distribution, and is an important descriptor of multiphase flow. The energy and mass transport mechanisms differ greatly among flow regimes. 
It is necessary to solve the flow-regime recognition problem to gain a clear understanding of the physical phenomena and mechanisms of multiphase flow. The flow regime is also one of the main factors affecting the online measurement accuracy of phase fraction, flow rate and other phase parameters. Therefore, developing new principles and methods for online multiphase flow regime recognition is of great scientific and technological importance, with broad industrial relevance. In this paper, the key reasons why existing methods cannot solve industrial multiphase flow pattern recognition are first clarified. Then the prerequisites for realizing online recognition of the multiphase flow regime are analyzed, and recognition rules for partial flow patterns are obtained from a large body of experimental data. Standard templates for each flow regime's features are calculated with a self-organizing clustering algorithm. A multi-sensor data fusion method is proposed to realize online recognition of the multiphase flow regime from pressure and differential pressure signals, which overcomes the severe influence of fluid flow velocity and oil fraction on the recognition. The online recognition method has been tested in practice, with less than 10 percent measurement error. The method offers high confidence, good fault tolerance, and modest requirements on single-sensor performance. Among the various flow patterns of gas-liquid flow, slug flow occurs frequently in the petroleum, chemical, civil and nuclear industries. In offshore oil and gas fields, the maximum slug length and its statistical distribution are very important for the design of separators and downstream processing facilities under steady-state operation. However, transient conditions may be encountered in production, such as operational upsets, start-up, shut-down, pigging and blowdown, which are key operational and safety issues in oil field development. 
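The template-based recognition step can be sketched as nearest-template classification in a feature space. The features (mean and standard deviation of a differential-pressure trace), the template values, and the regime names below are invented placeholders, not the paper's self-organized templates:

```python
def features(signal):
    """Crude statistical features of a differential-pressure trace: mean and
    standard deviation (illustrative stand-ins for clustered features)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal) / n
    return (mean, var ** 0.5)

# Hypothetical standard templates (feature centroids), one per flow regime.
TEMPLATES = {
    "stratified": (0.2, 0.05),
    "slug":       (0.5, 0.30),
    "annular":    (0.8, 0.10),
}

def recognise_regime(signal):
    """Assign the regime whose template is nearest in feature space."""
    f = features(signal)
    return min(TEMPLATES,
               key=lambda r: sum((a - b) ** 2 for a, b in zip(f, TEMPLATES[r])))

# A trace oscillating strongly around 0.5 lands nearest the "slug" template.
trace = [0.5 + 0.3 * (-1) ** i for i in range(100)]
regime = recognise_regime(trace)
```

A fielded system would fuse several sensors and many more features; the point here is only the nearest-template decision rule.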
It is therefore necessary to understand the flow parameters under transient conditions. In this paper, the evolution of slug length along a horizontal pipe in gas-liquid flow is also studied in detail, and an experimental study of flowrate transients in slug flow is provided. In addition, the special gas-liquid flow phenomenon often encountered over the life span of offshore oil fields, called severe slugging, is studied experimentally and some results are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3504065','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3504065"><span>Exploiting Genomic Knowledge in Optimising Molecular Breeding Programmes: Algorithms from Evolutionary Computing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.</p> <p>2012-01-01</p> <p>Comparatively few studies have directly addressed the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. 
We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5873906','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5873906"><span>Multivariate Analysis of the Cotton Seed Ionome Reveals a Shared Genetic Architecture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pauli, Duke; Ziegler, Greg; Ren, Min; Jenks, Matthew A.; Hunsaker, Douglas J.; Zhang, Min; Baxter, Ivan; Gore, Michael A.</p> <p>2018-01-01</p> <p>To mitigate the effects of heat and drought stress, a better understanding of the genetic control of physiological responses to these environmental conditions is needed. To this end, we evaluated an upland cotton (Gossypium hirsutum L.) mapping population under water-limited and well-watered conditions in a hot, arid environment. 
The elemental concentrations (ionome) of seed samples from the population were profiled in addition to those of soil samples taken from throughout the field site to better model environmental variation. The elements profiled in seeds exhibited moderate to high heritabilities, as well as strong phenotypic and genotypic correlations between elements that were not altered by the imposed irrigation regimes. Quantitative trait loci (QTL) mapping results from a Bayesian classification method identified multiple genomic regions where QTL for individual elements colocalized, suggesting that genetic control of the ionome is highly interrelated. To more fully explore this genetic architecture, multivariate QTL mapping was implemented among groups of biochemically related elements. This analysis revealed both additional and pleiotropic QTL responsible for coordinated control of phenotypic variation for elemental accumulation. Machine learning algorithms that utilized only ionomic data predicted the irrigation regime under which genotypes were evaluated with very high accuracy. Taken together, these results demonstrate the extent to which the seed ionome is genetically interrelated and predictive of plant physiological responses to adverse environmental conditions. PMID:29437829</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMGC53D0928D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMGC53D0928D"><span>Thermal regime of an ice-wedge polygon landscape near Barrow, Alaska</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Daanen, R. P.; Liljedahl, A. K.</p> <p>2017-12-01</p> <p>Tundra landscapes are changing all over the circumpolar Arctic due to permafrost degradation. 
Soil cracking and infilling of meltwater repeated over thousands of years form ice wedges, which produce the characteristic surface pattern of ice-wedge polygon tundra. Rapid top-down thawing of massive ice leads to differential ground subsidence and sets in motion a series of short- and long-term hydrological and ecological changes. Subsequent responses in the soil thermal regime drive further permafrost degradation and/or stabilization. Here we explore the soil thermal regime of an ice-wedge polygon terrain near Utqiagvik (formerly Barrow) with the Water balance Simulation Model (WaSiM). WaSiM is a hydro-thermal model developed to simulate the water balance at the watershed scale and was recently refined to represent the hydrological processes unique to cold climates. WaSiM includes modules that represent surface runoff, evapotranspiration, groundwater, and soil moisture, while active layer freezing and thawing is based on a direct coupling of hydrological and thermal processes. A new snow module expands the vadose zone calculations into the snow pack, allowing the model to simulate the snow as a porous medium similar to soil. Together with a snow redistribution algorithm based on local topography, this latest addition to WaSiM makes simulation of the ground thermal regime much more accurate during winter months. 
Effective representation of ground temperatures during winter is crucial in the simulation of the permafrost thermal regime and allows for refined predictions of future ice-wedge degradation or stabilization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H33C1688P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H33C1688P"><span>Hydrologic classification of rivers based on cluster analysis of dimensionless hydrologic signatures: Applications for environmental instream flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Praskievicz, S. J.; Luo, C.</p> <p>2017-12-01</p> <p>Classification of rivers is useful for a variety of purposes, such as generating and testing hypotheses about watershed controls on hydrology, predicting hydrologic variables for ungaged rivers, and setting goals for river management. In this research, we present a bottom-up (based on machine learning) river classification designed to investigate the underlying physical processes governing rivers' hydrologic regimes. The classification was developed for the entire state of Alabama, based on 248 United States Geological Survey (USGS) stream gages that met criteria for length and completeness of records. Five dimensionless hydrologic signatures were derived for each gage: slope of the flow duration curve (indicator of flow variability), baseflow index (ratio of baseflow to average streamflow), rising limb density (number of rising limbs per unit time), runoff ratio (ratio of long-term average streamflow to long-term average precipitation), and streamflow elasticity (sensitivity of streamflow to precipitation). We used a Bayesian clustering algorithm to classify the gages, based on the five hydrologic signatures, into distinct hydrologic regimes. 
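Several of the dimensionless signatures listed above can be computed directly from a gaged streamflow series. A simplified sketch with toy data (the baseflow separation is a deliberately crude running minimum, not a standard digital filter):

```python
import math

def runoff_ratio(flow, precip):
    """Ratio of long-term average streamflow to long-term average precipitation."""
    return (sum(flow) / len(flow)) / (sum(precip) / len(precip))

def baseflow_index(flow, window=5):
    """Ratio of baseflow to total flow, with baseflow approximated by a
    running minimum over a centred window (a crude stand-in for formal
    baseflow separation)."""
    base = [min(flow[max(0, i - window):i + window + 1]) for i in range(len(flow))]
    return sum(base) / sum(flow)

def fdc_slope(flow):
    """Slope of the flow duration curve between roughly the 33rd and 66th
    exceedance percentiles, a common indicator of flow variability."""
    q = sorted(flow, reverse=True)
    q33 = q[len(q) // 3]
    q66 = q[2 * len(q) // 3]
    return (math.log(q33) - math.log(q66)) / (0.66 - 0.33)

flow = [1.0, 5.0, 2.0, 1.0, 1.0, 4.0, 1.5, 1.0, 1.0, 3.0]  # toy daily series
rr = runoff_ratio(flow, precip=[4.0] * 10)
bfi = baseflow_index(flow)
slope = fdc_slope(flow)
```

The resulting signature vector (here `rr`, `bfi`, `slope`, plus rising-limb density and streamflow elasticity in the study) is what the clustering algorithm operates on.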
We then used classification and regression trees (CART) to predict each gaged river's membership in different hydrologic regimes based on climatic and watershed variables. Using existing geospatial data, we applied the CART analysis to classify ungaged streams in Alabama, with the National Hydrography Dataset Plus (NHDPlus) catchment (average area 3 km2) as the unit of classification. The results of the classification can be used for meeting management and conservation objectives in Alabama, such as developing statewide standards for environmental instream flows. Such hydrologic classification approaches are promising for contributing to process-based understanding of river systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29513352','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29513352"><span>Cherry-picking functionally relevant substates from long md trajectories using a stratified sampling approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chandramouli, Balasubramanian; Mancini, Giordano</p> <p>2016-01-01</p> <p>Classical Molecular Dynamics (MD) simulations can provide insights at the nanoscopic scale into protein dynamics. Currently, simulations of large proteins and complexes can be routinely carried out in the ns-μs time regime. Clustering of MD trajectories is often performed to identify selective conformations and to compare simulation and experimental data coming from different sources on closely related systems. 
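Benchmarking clustering algorithms without ground truth typically relies on internal validity criteria such as the silhouette coefficient. A minimal sketch on toy one-dimensional data (not the MD trajectories or the specific criteria of the study):

```python
def silhouette(points, labels):
    """Mean silhouette coefficient for a labelled 1-D dataset: compares each
    point's mean distance to its own cluster (a) with its mean distance to
    the nearest other cluster (b); values near 1 indicate tight, separated
    clusters."""
    def mean_dist(p, members):
        return sum(abs(p - q) for q in members) / len(members)

    clusters = {l: [p for p, m in zip(points, labels) if m == l]
                for l in set(labels)}
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q != p]
        if not own:            # singleton cluster: silhouette defined as 0
            scores.append(0.0)
            continue
        a = mean_dist(p, own)
        b = min(mean_dist(p, clusters[m]) for m in clusters if m != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
good = silhouette(pts, [0, 0, 0, 1, 1, 1])  # labels match the structure
bad = silhouette(pts, [0, 1, 0, 1, 0, 1])   # labels ignore the structure
```

Comparing such scores across algorithms and cluster counts is the usual way to pick a clustering when no reference classification exists.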
However, clustering techniques are usually applied without careful validation of the results; benchmark studies applying different algorithms to MD data often deal with relatively small peptides rather than average-sized or large proteins; and clustering is often applied both to analyze refined data and to simplify further analysis of trajectories. Herein, we propose a strategy to classify MD data while carefully benchmarking the performance of clustering algorithms and internal validation criteria for such methods. We demonstrate the method on two showcase systems with different features, and compare the classification of trajectories in real and PCA space. We posit that the prototype procedure adopted here could be highly fruitful for clustering large trajectories of multiple systems, or those resulting from enhanced sampling techniques such as replica exchange simulations. Copyright: © 2016 by Fabrizio Serra editore, Pisa · Roma.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005SPIE.5979..114P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005SPIE.5979..114P"><span>Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald</p> <p>2005-10-01</p> <p>Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. 
The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. 
Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhDT.......397C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhDT.......397C"><span>Quantum evolution: The case of weak localization for a 3D alloy-type Anderson model and application to Hamiltonian based quantum computation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cao, Zhenwei</p> <p></p> <p>Over the years, people have found Quantum Mechanics to be extremely useful in explaining various physical phenomena from a 
microscopic point of view. Anderson localization, named after physicist P. W. Anderson, states that disorder in a crystal can cause non-spreading of wave packets, which is one possible mechanism (at the single-electron level) to explain metal-insulator transitions. The theory of quantum computation promises to bring greater computational power over classical computers by making use of some special features of Quantum Mechanics. The first part of this dissertation considers a 3D alloy-type model, where the Hamiltonian is the sum of the finite difference Laplacian corresponding to free motion of an electron and a random potential generated by a sign-indefinite single-site potential. The result shows that localization occurs in the weak disorder regime, i.e., when the coupling parameter λ is very small, for energies E ≤ −Cλ². The second part of this dissertation extends adiabatic quantum computing (AQC) algorithms for the unstructured search problem to the case when the number of marked items is unknown. In an ideal situation, an explicit quantum algorithm together with a counting subroutine are given that achieve the optimal Grover speedup over classical algorithms, i.e., roughly speaking, reduce O(2^n) to O(2^(n/2)), where n is the size of the problem. 
However, if one considers more realistic settings, the result shows this quantum speedup is achievable only under a very rigid control precision requirement (e.g., exponentially small control error).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ClDy..tmp.2350S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ClDy..tmp.2350S"><span>Selecting climate change scenarios for regional hydrologic impact studies based on climate extremes indices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Seo, Seung Beom; Kim, Young-Oh; Kim, Youngil; Eum, Hyung-Il</p> <p>2018-04-01</p> <p>When selecting a subset of climate change scenarios (GCM models), the priority is to ensure that the subset reflects the comprehensive range of possible model results for all variables concerned. Though many studies have attempted to improve the scenario selection, there is a lack of studies that discuss methods to ensure that the results from a subset of climate models contain the same range of uncertainty in hydrologic variables as when all models are considered. We applied the Katsavounidis-Kuo-Zhang (KKZ) algorithm to select a subset of climate change scenarios and demonstrated its ability to reduce the number of GCM models in an ensemble, while the ranges of multiple climate extremes indices were preserved. First, we analyzed the role of 27 ETCCDI climate extremes indices for scenario selection and selected the representative climate extreme indices. Before the selection of a subset, we excluded a few deficient GCM models that could not represent the observed climate regime. 
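The KKZ algorithm referenced above is a greedy max-min selection: start from the vector of largest norm, then repeatedly add the candidate farthest (in minimum distance) from everything already selected, which tends to preserve the spread of the ensemble. A sketch with invented two-dimensional coordinates standing in for standardised climate-index vectors:

```python
def kkz_select(points, k):
    """KKZ-style subset selection: seed with the point of largest norm, then
    greedily add the point whose minimum squared Euclidean distance to the
    selected set is largest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    selected = [max(range(len(points)),
                    key=lambda i: sum(x * x for x in points[i]))]
    while len(selected) < k:
        selected.append(max(
            (i for i in range(len(points)) if i not in selected),
            key=lambda i: min(dist2(points[i], points[j]) for j in selected),
        ))
    return selected

# Each row: one GCM's coordinates in (standardised) climate-index space.
models = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.0, 1.1), (1.2, 0.1)]
subset = kkz_select(models, 3)
```

Note that model 1, nearly a duplicate of model 0, is never chosen: the max-min rule favours members that add to the spanned range of the indices.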
Subsequently, we discovered that a subset of GCM models selected by the KKZ algorithm with the representative climate extreme indices could not capture the full potential range of changes in hydrologic extremes (e.g., 3-day peak flow and 7-day low flow) in some regional case studies. However, applying the KKZ algorithm with a different set of climate indices, ones correlated with the hydrologic extremes, overcame this limitation. Key climate indices, dependent on the hydrologic extremes to be projected, must therefore be determined prior to the selection of a subset of GCM models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70179138','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70179138"><span>Mid Columbia sturgeon incubation and rearing study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Parsley, Michael J.; Kofoot, Eric; Blubaugh, J</p> <p>2011-01-01</p> <p>This report describes the results from the second year of a three-year investigation on the effects of different thermal regimes on incubation and rearing of early life stages of white sturgeon Acipenser transmontanus. The Columbia River has been significantly altered by the construction of dams, resulting in annual flows and water temperatures that differ from historical levels. White sturgeon have been demonstrated to spawn in two very distinct sections of the Columbia River in British Columbia, Canada, both located immediately downstream of hydropower facilities. The thermal regimes differ substantially between these two areas. 
The general approach of this study was to incubate and rear white sturgeon early life stages under two thermal regimes; one mimicking the current, cool water regime of the Columbia River downstream from Revelstoke Dam, and one mimicking a warmer regime similar to conditions found on the Columbia River at the international border. Second-year results suggest that thermal regimes during incubation influence rate of egg development and size at hatch. Eggs incubated under the warm thermal regime hatched sooner than those incubated under the cool thermal regime. Mean length of free embryos at hatch was significantly different between thermal regimes with free embryos from the warm thermal regime being longer at hatch. However, free embryos from the cool thermal regime had a significantly higher mean weight at hatch. This is in contrast with results obtained during 2009. The rearing trials revealed that growth of fish reared in the cool thermal regime was substantially less than growth of fish reared in the warm thermal regime. The magnitude of mortality was greatest in the warm thermal regime prior to initiation of exogenous feeding, but chronic low levels of mortality in the cool thermal regime were higher throughout the period. 
The starvation trials showed that the fish in the warm thermal regime exhausted their yolk reserves faster than fish in the cool thermal regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvB..97k5161J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvB..97k5161J"><span>Infinite projected entangled-pair state algorithm for ruby and triangle-honeycomb lattices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jahromi, Saeed S.; Orús, Román; Kargarian, Mehdi; Langari, Abdollah</p> <p>2018-03-01</p> <p>The infinite projected entangled-pair state (iPEPS) algorithm is one of the most efficient techniques for studying the ground-state properties of two-dimensional quantum lattice Hamiltonians in the thermodynamic limit. Here, we show how the algorithm can be adapted to explore nearest-neighbor local Hamiltonians on the ruby and triangle-honeycomb lattices, using the corner transfer matrix (CTM) renormalization group for 2D tensor network contraction. Additionally, we show how the CTM method can be used to calculate the ground-state fidelity per lattice site and the boundary density operator and entanglement entropy (EE) on an infinite cylinder. As a benchmark, we apply the iPEPS method to the ruby model with anisotropic interactions and explore the ground-state properties of the system. We further extract the phase diagram of the model in different regimes of the couplings by measuring two-point correlators, ground-state fidelity, and EE on an infinite cylinder. 
Our phase diagram is in agreement with previous studies of the model by exact diagonalization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3851580','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3851580"><span>Improving single molecule force spectroscopy through automated real-time data collection and quantification of experimental conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Scholl, Zackary N.; Marszalek, Piotr E.</p> <p>2013-01-01</p> <p>The benefits of single molecule force spectroscopy (SMFS) clearly outweigh the challenges which include small sample sizes, tedious data collection and introduction of human bias during the subjective data selection. These difficulties can be partially eliminated through automation of the experimental data collection process for atomic force microscopy (AFM). Automation can be accomplished using an algorithm that triages usable force-extension recordings quickly with positive and negative selection. We implemented an algorithm based on the windowed fast Fourier transform of force-extension traces that identifies peaks using force-extension regimes to correctly identify usable recordings from proteins composed of repeated domains. This algorithm excels as a real-time diagnostic because it involves <30 ms computational time, has high sensitivity and specificity, and efficiently detects weak unfolding events. We used the statistics provided by the automated procedure to clearly demonstrate the properties of molecular adhesion and how these properties change with differences in the cantilever tip and protein functional groups and protein age. 
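The real-time triage idea, positive and negative selection of force-extension recordings, can be sketched with a much simpler criterion than the windowed-FFT approach of the paper: count unfolding peaks above a force threshold and accept traces whose count falls in the expected range for a repeat-domain protein. All thresholds below are invented:

```python
def triage(trace, min_force=50.0, expected_peaks=(3, 8)):
    """Accept or reject a force-extension recording by counting unfolding
    peaks (strict local maxima above a force threshold, in pN). A simplified
    stand-in for a windowed-FFT peak detector; thresholds are placeholders."""
    peaks = sum(
        1
        for i in range(1, len(trace) - 1)
        if trace[i] > trace[i - 1]
        and trace[i] > trace[i + 1]
        and trace[i] >= min_force
    )
    lo, hi = expected_peaks
    return lo <= peaks <= hi

# Synthetic sawtooth with five unfolding events above the threshold.
trace = []
for _ in range(5):
    trace += [10.0, 40.0, 80.0, 20.0]
ok = triage(trace)
```

Such a check is cheap enough to run between pulls, which is what makes automated, unbiased data collection feasible.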
PMID:24001740</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001JFM...440..147K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001JFM...440..147K"><span>Inertial effects in three-dimensional spinodal decomposition of a symmetric binary fluid mixture: a lattice Boltzmann study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kendon, Vivien M.; Cates, Michael E.; Pagonabarraga, Ignacio; Desplat, J.-C.; Bladon, Peter</p> <p>2001-08-01</p> <p>The late-stage demixing following spinodal decomposition of a three-dimensional symmetric binary fluid mixture is studied numerically, using a thermodynamically consistent lattice Boltzmann method. We combine results from simulations with different numerical parameters to obtain an unprecedented range of length and time scales when expressed in reduced physical units. (These are the length and time units derived from fluid density, viscosity, and interfacial tension.) Using eight large (256³) runs, the resulting composite graph of reduced domain size l against reduced time t covers 1 ≲ l ≲ 10⁵, 10 ≲ t ≲ 10⁸. Our data are consistent with the dynamical scaling hypothesis that l(t) is a universal scaling curve. We give the first detailed statistical analysis of fluid motion, rather than just domain evolution, in simulations of this kind, and introduce scaling plots for several quantities derived from the fluid velocity and velocity gradient fields. Using the conventional definition of Reynolds number for this problem, Re_φ = l·dl/dt, we attain values approaching 350. 
At Re_φ ≳ 100 (which requires t ≳ 10⁶) we find clear evidence of Furukawa's inertial scaling (l ∼ t^(2/3)), although the crossover from the viscous regime (l ∼ t) is both broad and late (10² ≲ t ≲ 10⁶). Though it cannot be ruled out, we find no indication that Re_φ is self-limiting (l ∼ t^(1/2)) at late times, as recently proposed by Grant & Elder. Detailed study of the velocity fields confirms that, for our most inertial runs, the RMS ratio of nonlinear to viscous terms in the Navier-Stokes equation, R2, is of order 10, with the fluid mixture showing incipient turbulent characteristics. However, we cannot go far enough into the inertial regime to obtain a clear length separation of domain size, Taylor microscale, and Kolmogorov scale, as would be needed to test a recent ‘extended’ scaling theory of Kendon (in which R2 is self-limiting but Re_φ is not). Obtaining our results has required careful steering of several numerical control parameters so as to maintain adequate algorithmic stability, efficiency and isotropy, while eliminating unwanted residual diffusion. (We argue that the latter affects some studies in the literature which report l ∼ t^(2/3) for t ≲ 10⁴.) We analyse the various sources of error and find them just within acceptable levels (a few percent each) in most of our datasets. 
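The competing scaling regimes discussed here (viscous l ∼ t versus inertial l ∼ t^(2/3)) can be distinguished by fitting the apparent growth exponent on a log-log plot of domain size against time. A minimal sketch with synthetic data:

```python
import math

def growth_exponent(times, lengths):
    """Least-squares slope of log(l) against log(t): the apparent dynamical
    scaling exponent (about 1 in the viscous regime, 2/3 for Furukawa's
    inertial scaling)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic inertial-regime data, l = 0.1 * t**(2/3) (prefactor invented).
ts = [1e6, 2e6, 5e6, 1e7]
ls = [0.1 * t ** (2 / 3) for t in ts]
alpha = growth_exponent(ts, ls)
```

On real data the fitted exponent drifts through the broad crossover window, which is why the paper needed such a wide range of reduced times.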
To bring these under significantly better control, or to go much further into the inertial regime, would require much larger computational resources and/or a breakthrough in algorithm design.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhDT.......127J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhDT.......127J"><span>Time Series Reconstruction of Surface Flow Velocity on Marine-terminating Outlet Glaciers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jeong, Seongsu</p> <p></p> <p>The flow velocity of glacier and its fluctuation are valuable data to study the contribution of sea level rise of ice sheet by understanding its dynamic structure. Repeat-image feature tracking (RIFT) is a platform-independent, feature tracking-based velocity measurement methodology effective for building a time series of velocity maps from optical images. However, limited availability of perfectly-conditioned images motivated to improve robustness of the algorithm. With this background, we developed an improved RIFT algorithm based on multiple-image multiple-chip algorithm presented in Ahn and Howat (2011). The test results affirm improvement in the new RIFT algorithm in avoiding outlier, and the analysis of the multiple matching results clarified that each individual matching results worked in complementary manner to deduce the correct displacements. LANDSAT 8 is a new satellite in LANDSAT program that has begun its operation since 2013. The improved radiometric performance of OLI aboard the satellite is expected to enable better velocity mapping results than ETM+ aboard LANDSAT 7. However, it was not yet well studied that in what cases the new will sensor will be beneficial, and how much the improvement will be obtained. 
We carried out a simulation-based comparison between ETM+ and OLI and confirmed OLI outperforms ETM+ especially in low contrast conditions, especially in polar night, translucent cloud covers, and bright upglacier with less texture. We have identified a rift on ice shelf of Pine island glacier located in western Antarctic ice sheet. Unlike the previous events, the evolution of the current started from the center of the ice shelf. In order to analyze this unique event, we utilized the improved RIFT algorithm to its OLI images to retrieve time series of velocity maps. We discovered from the analyses that the part of ice shelf below the rift is changing its speed, and shifting of splashing crevasses on shear margin is migrating to the center of the shelf. Concerning the concurrent disintegration of ice melange on its western part of the terminus, we postulate that change in flow regime attributes to loss of resistance force exerted by the melange. There are several topics that need to be addressed for further improve the RIFT algorithm. As coregistration error is significant contributor to the velocity measurement, a method to mitigate that error needs to be devised. Also, considering that the domain of RIFT product spans not only in space but also in time, its regridding and gap filling work will benefit from extending its domain to both space and time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830020178','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830020178"><span>Film thickness for different regimes of fluid-film lubrication. [elliptical contacts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hamrock, B. 
J.; Dowson, D.</p> <p>1983-01-01</p> <p>Mathematical formulas are presented which express the dimensionless minimum film thickness for the four lubrication regimes found in elliptical contacts: isoviscous-rigid regime; piezoviscous-rigid regime; isoviscous-elastic regime; and piezoviscous-elastic regime. The relative importance of pressure on elastic distortion and lubricant viscosity is the factor that distinguishes these regimes for a given conjunction geometry. In addition, these equations were used to develop maps of the lubrication regimes by plotting film thickness contours on a log-log grid of the dimensionless viscosity and elasticity parameters for three values of the ellipticity parameter. These results present a complete theoretical film thickness parameter solution for elliptical constants in the four lubrication regimes. The results are particularly useful in initial investigations of many practical lubrication problems involving elliptical conjunctions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27983618','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27983618"><span>Efficient Terahertz Wide-Angle NUFFT-Based Inverse Synthetic Aperture Imaging Considering Spherical Wavefront.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang</p> <p>2016-12-14</p> <p>An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by 1D fast Fourier transform (FFT) is discussed. 
Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field RCSs (Radar Cross Section) of targets in the terahertz regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26561119','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26561119"><span>Interferogram conditioning for improved Fourier analysis and application to X-ray phase imaging by grating interferometry.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme</p> <p>2015-11-02</p> <p>An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were consecutively used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. 
A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvD..97c6022G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvD..97c6022G"><span>All-optical signatures of strong-field QED in the vacuum emission picture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gies, Holger; Karbstein, Felix; Kohlfürst, Christian</p> <p>2018-02-01</p> <p>We study all-optical signatures of the effective nonlinear couplings among electromagnetic fields in the quantum vacuum, using the collision of two focused high-intensity laser pulses as an example. The experimental signatures of quantum vacuum nonlinearities are encoded in signal photons, whose kinematic and polarization properties differ from the photons constituting the macroscopic laser fields. We implement an efficient numerical algorithm allowing for the theoretical investigation of such signatures in realistic field configurations accessible in experiment. This algorithm is based on a vacuum emission scheme and can readily be adapted to the collision of more laser beams or further involved field configurations. 
We solve the case of two colliding pulses in full 3 +1 -dimensional spacetime and identify experimental geometries and parameter regimes with improved signal-to-noise ratios.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvD..96j3512B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvD..96j3512B"><span>Fast optimization algorithms and the cosmological constant</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad</p> <p>2017-11-01</p> <p>Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. 
We demonstrate this by explicitly finding a cosmological constant of order 10-120 in a randomly generated 1 09-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EL....11450003M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EL....11450003M"><span>Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido</p> <p>2016-06-01</p> <p>We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit of kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem in a set of two-level subsystems, we find a natural variable step size, that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. 
The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20140002064&hterms=physics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dphysics','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20140002064&hterms=physics&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dphysics"><span>The Physics of Imaging with Remote Sensors : Photon State Space & Radiative Transfer</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Davis, Anthony B.</p> <p>2012-01-01</p> <p>Standard (mono-pixel/steady-source) retrieval methodology is reaching its fundamental limit with access to multi-angle/multi-spectral photo- polarimetry. Next... Two emerging new classes of retrieval algorithm worth nurturing: multi-pixel time-domain Wave-radiometry transition regimes, and more... Cross-fertilization with bio-medical imaging. Physics-based remote sensing: - What is "photon state space?" - What is "radiative transfer?" - Is "the end" in sight? Two wide-open frontiers! 
center dot Examples (with variations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890019295','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890019295"><span>Numerical studies of convective heat transfer in an inclined semiannular enclosure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wang, Lin-Wen; Yung, Chain-Nan; Chai, An-Ti; Rashidnia, Nasser</p> <p>1989-01-01</p> <p>Natural convection heat transfer in a two-dimensional differentially heated semiannular enclosure is studied. The enclosure is isothermally heated and cooled at the inner and outer walls, respectively. A commercial software based on the SIMPLER algorithm was used to simulate the velocity and temperature profiles. Various parameters that affect the momentum and heat transfer processes were examined. These parameters include the Rayleigh number, Prandtl number, radius ratio, and the angle of inclination. A flow regime extending from conduction-dominated to convection-dominated flow was examined. The computed results of heat transfer are presented as a function of flow parameter and geometric factors. It is found that the heat transfer rate attains a minimum when the enclosure is tilted about +50 deg with respect to the gravitational direction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/973383','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/973383"><span>Simulation Studies of the X-Ray Free-Electron Laser Oscillator</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lindberg, R. 
R.; Shyd'ko, Y.; Kim, K.-J</p> <p></p> <p>Simulations of the x-ray free-electron laser (FEL) oscillator are presented that include transverse effects and realistic Bragg crystal properties with the two-dimensional code GINGER. In the present cases considered the radiation divergence is much narrower than the crystal acceptance, and the numerical algorithm can be simplified by ignoring the finite angular bandwidth of the crystal. In this regime GINGER shows that the saturated x-ray pulses have 109 photons and are nearly Fourier-limited with peak powers in excess of 1 MW. Wealso include preliminary results for a four-mirror cavity that can be tuned in wavelength over a few percent, with futuremore » plans to incorporate the full transverse response of the Bragg crystals into GINGER to more accurately model this tunable source.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.7400B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.7400B"><span>Comparison of fault-related folding algorithms to restore a fold-and-thrust-belt</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Brandes, Christian; Tanner, David</p> <p>2017-04-01</p> <p>Fault-related folding means the contemporaneous evolution of folds as a consequence of fault movement. It is a common deformation process in the upper crust that occurs worldwide in accretionary wedges, fold-and-thrust belts, and intra-plate settings, in either strike-slip, compressional, or extensional regimes. Over the last 30 years different algorithms have been developed to simulate the kinematic evolution of fault-related folds. All these models of fault-related folding include similar simplifications and limitations and use the same kinematic behaviour throughout the model (Brandes & Tanner, 2014). 
We used a natural example of fault-related folding from the Limón fold-and-thrust belt in eastern Costa Rica to test two different algorithms and to compare the resulting geometries. A thrust fault and its hanging-wall anticline were restored using both the trishear method (Allmendinger, 1998; Zehnder & Allmendinger, 2000) and the fault-parallel flow approach (Ziesch et al. 2014); both methods are widely used in academia and industry. The resulting hanging-wall folds above the thrust fault are restored in substantially different fashions. This is largely a function of the propagation-to-slip ratio of the thrust, which controls the geometry of the related anticline. Understanding the controlling factors for anticline evolution is important for the evaluation of potential hydrocarbon reservoirs and the characterization of fault processes. References: Allmendinger, R.W., 1998. Inverse and forward numerical modeling of trishear fault propagation folds. Tectonics, 17, 640-656. Brandes, C., Tanner, D.C. 2014. Fault-related folding: a review of kinematic models and their application. Earth Science Reviews, 138, 352-370. Zehnder, A.T., Allmendinger, R.W., 2000. Velocity field for the trishear model. Journal of Structural Geology, 22, 1009-1014. Ziesch, J., Tanner, D.C., Krawczyk, C.M. 2014. Strain associated with the fault-parallel flow algorithm during kinematic fault displacement. 
Mathematical Geosciences, 46(1), 59-73.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170002567','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170002567"><span>OLYMPEX Data Workshop: GPM View</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Petersen, W.</p> <p>2017-01-01</p> <p>OLYMPEX Primary Objectives: Datasets to enable: (1) Direct validation over complex terrain at multiple scales, liquid and frozen precip types, (a) Do we capture terrain and synoptic regime transitions, orographic enhancements/structure, full range of precipitation intensity (e.g., very light to heavy) and types, spatial variability? (b) How well can we estimate space/time-accumulated precipitation over terrain (liquid + frozen)? (2) Physical validation of algorithms in mid-latitude cold season frontal systems over ocean and complex terrain, (a) What are the column properties of frozen, melting, liquid hydrometeors-their relative contributions to estimated surface precipitation, transition under the influence of terrain gradients, and systematic variability as a function of synoptic regime? 
(3) Integrated hydrologic validation in complex terrain, (a) Can satellite estimates be combined with modeling over complex topography to drive improved products (assimilation, downscaling) [Level IV products] (b) What are capabilities and limitations for use of satellite-based precipitation estimates in stream/river flow forecasting?</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1779913','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1779913"><span>Spreading of Neutrophils: From Activation to Migration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Sengupta, Kheya; Aranda-Espinoza, Helim; Smith, Lee; Janmey, Paul; Hammer, Daniel</p> <p>2006-01-01</p> <p>Neutrophils rely on rapid changes in morphology to ward off invaders. Time-resolved dynamics of spreading human neutrophils after activation by the chemoattractant fMLF (formyl methionyl leucyl phenylalanine) was observed by RICM (reflection interference contrast microscopy). An image-processing algorithm was developed to identify the changes in the overall cell shape and the zones of close contact with the substrate. We show that in the case of neutrophils, cell spreading immediately after exposure of fMLF is anisotropic and directional. The dependence of spreading area, A, of the cell as a function of time, t, shows several distinct regimes, each of which can be fitted as power laws (A ∼ tb). The different spreading regimes correspond to distinct values of the exponent b and are related to the adhesion state of the cell. Treatment with cytochalasin-B eliminated the anisotropy in the spreading. 
PMID:17012330</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1367663-robust-spectral-unmixing-sparse-multispectral-lidar-waveforms-using-gamma-markov-random-fields','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1367663-robust-spectral-unmixing-sparse-multispectral-lidar-waveforms-using-gamma-markov-random-fields"><span>Robust Spectral Unmixing of Sparse Multispectral Lidar Waveforms using Gamma Markov Random Fields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Altmann, Yoann; Maccarone, Aurora; McCarthy, Aongus; ...</p> <p>2017-05-10</p> <p>Here, this paper presents a new Bayesian spectral un-mixing algorithm to analyse remote scenes sensed via sparse multispectral Lidar measurements. To a first approximation, in the presence of a target, each Lidar waveform consists of a main peak, whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source considered (i.e, on the target reflectivity). Besides, these temporal responses are usually assumed to be corrupted by Poisson noise in the low photon count regime. When considering multiple wavelengths, it becomes possible to use spectral information in order to identify and quantify the mainmore » materials in the scene, in addition to estimation of the Lidar-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed 3D scene. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data acquired in a controlled environment. 
The results demonstrate the possibility to unmix spectral responses constructed from extremely sparse photon counts (less than 10 photons per pixel and band).« less</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li class="active"><span>13</span></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li class="active"><span>14</span></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920003339','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920003339"><span>Energy Models for One-Carrier Transport in Semiconductor Devices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jerome, Joseph W.; Shu, Chi-Wang</p> <p>1991-01-01</p> <p>Moment models of carrier transport, derived from the Boltzmann equation, made possible the simulation of certain key effects 
through such realistic assumptions as energy dependent mobility functions. This type of global dependence permits the observation of velocity overshoot in the vicinity of device junctions, not discerned via classical drift-diffusion models, which are primarily local in nature. It was found that a critical role is played in the hydrodynamic model by the heat conduction term. When ignored, the overshoot is inappropriately damped. When the standard choice of the Wiedemann-Franz law is made for the conductivity, spurious overshoot is observed. Agreement with Monte-Carlo simulation in this regime required empirical modification of this law, or nonstandard choices. Simulations of the hydrodynamic model in one and two dimensions, as well as simulations of a newly developed energy model, the RT model, are presented. The RT model, intermediate between the hydrodynamic and drift-diffusion model, was developed to eliminate the parabolic energy band and Maxwellian distribution assumptions, and to reduce the spurious overshoot with physically consistent assumptions. The algorithms employed for both models are the essentially non-oscillatory shock capturing algorithms. 
Some mathematical results are presented and contrasted with the highly developed state of the drift-diffusion model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150000341','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150000341"><span>Assessment of 10 Year Record of Aerosol Optical Depth from OMI UV Observations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ahn, Changwoo; Torres, Omar; Jethva, Hiren</p> <p>2014-01-01</p> <p>The Ozone Monitoring Instrument (OMI) onboard the EOS-Aura satellite provides information on aerosol optical properties by making use of the large sensitivity to aerosol absorption in the near-ultraviolet (UV) spectral region. Another important advantage of using near UV observations for aerosol characterization is the low surface albedo of all terrestrial surfaces in this spectral region that reduces retrieval errors associated with land surface reflectance characterization. In spite of the 13 × 24 square kilometers coarse sensor footprint, the OMI near UV aerosol algorithm (OMAERUV) retrieves aerosol optical depth (AOD) and single-scattering albedo under cloud-free conditions from radiance measurements at 354 and 388 nanometers. We present validation results of OMI AOD against space and time collocated Aerosol Robotic Network measured AOD values over multiple stations representing major aerosol episodes and regimes. OMAERUV's performance is also evaluated with respect to those of the Aqua-MODIS Deep Blue and Terra-MISR AOD algorithms over arid and semi-arid regions in Northern Africa. 
The outcome of the evaluation analysis indicates that in spite of the "row anomaly" problem, affecting the sensor since mid-2007, the long-term aerosol record shows remarkable sensor stability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1367663','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1367663"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Altmann, Yoann; Maccarone, Aurora; McCarthy, Aongus</p> <p></p> <p>Here, this paper presents a new Bayesian spectral un-mixing algorithm to analyse remote scenes sensed via sparse multispectral Lidar measurements. To a first approximation, in the presence of a target, each Lidar waveform consists of a main peak, whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source considered (i.e, on the target reflectivity). Besides, these temporal responses are usually assumed to be corrupted by Poisson noise in the low photon count regime. When considering multiple wavelengths, it becomes possible to use spectral information in order to identify and quantify the mainmore » materials in the scene, in addition to estimation of the Lidar-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed 3D scene. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data acquired in a controlled environment. 
The results demonstrate the possibility to unmix spectral responses constructed from extremely sparse photon counts (less than 10 photons per pixel and band).« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24778601','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24778601"><span>Numerical study of entropy generation due to coupled laminar and turbulent mixed convection and thermal radiation in an enclosure filled with a semitransparent medium.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Goodarzi, M; Safaei, M R; Oztop, Hakan F; Karimipour, A; Sadeghinezhad, E; Dahari, M; Kazi, S N; Jomhari, N</p> <p>2014-01-01</p> <p>The effect of radiation on laminar and turbulent mixed convection heat transfer of a semitransparent medium in a square enclosure was studied numerically using the Finite Volume Method. A structured mesh and the SIMPLE algorithm were utilized to model the governing equations. Turbulence and radiation were modeled with the RNG k-ε model and Discrete Ordinates (DO) model, respectively. For Richardson numbers ranging from 0.1 to 10, simulations were performed for Rayleigh numbers in laminar flow (10⁴) and turbulent flow (10⁸). The model predictions were validated against previous numerical studies and good agreement was observed. The simulated results indicate that for laminar and turbulent motion states, computing the radiation heat transfer significantly enhanced the Nusselt number (Nu) as well as the heat transfer coefficient. Higher Richardson numbers did not noticeably affect the average Nusselt number and corresponding heat transfer rate. Besides, as expected, the heat transfer rate for the turbulent flow regime surpassed that in the laminar regime. 
The simulations additionally demonstrated that for a constant Richardson number, computing the radiation heat transfer markedly affected the heat transfer structure in the enclosure; however, its impact on the fluid flow structure was negligible.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3981010','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3981010"><span>Numerical Study of Entropy Generation due to Coupled Laminar and Turbulent Mixed Convection and Thermal Radiation in an Enclosure Filled with a Semitransparent Medium</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Goodarzi, M.; Safaei, M. R.; Oztop, Hakan F.; Karimipour, A.; Sadeghinezhad, E.; Dahari, M.; Kazi, S. N.; Jomhari, N.</p> <p>2014-01-01</p> <p>The effect of radiation on laminar and turbulent mixed convection heat transfer of a semitransparent medium in a square enclosure was studied numerically using the Finite Volume Method. A structured mesh and the SIMPLE algorithm were utilized to model the governing equations. Turbulence and radiation were modeled with the RNG k-ε model and Discrete Ordinates (DO) model, respectively. For Richardson numbers ranging from 0.1 to 10, simulations were performed for Rayleigh numbers in laminar flow (10⁴) and turbulent flow (10⁸). The model predictions were validated against previous numerical studies and good agreement was observed. The simulated results indicate that for laminar and turbulent motion states, computing the radiation heat transfer significantly enhanced the Nusselt number (Nu) as well as the heat transfer coefficient. Higher Richardson numbers did not noticeably affect the average Nusselt number and corresponding heat transfer rate. 
Moreover, as expected, the heat transfer rate for the turbulent flow regime surpassed that in the laminar regime. The simulations additionally demonstrated that for a constant Richardson number, computing the radiation heat transfer markedly affected the heat transfer structure in the enclosure; however, its impact on the fluid flow structure was negligible. PMID:24778601</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFMNG41E..01W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFMNG41E..01W"><span>Data assimilation in the low noise regime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Weare, J.; Vanden-Eijnden, E.</p> <p>2012-12-01</p> <p>On-line data assimilation techniques such as ensemble Kalman filters and particle filters tend to lose accuracy dramatically when presented with an unlikely observation. Such an observation may be caused by an unusually large measurement error or reflect a rare fluctuation in the dynamics of the system. Over a long enough span of time it becomes likely that one or several of these events will occur. In some cases they are signatures of the most interesting features of the underlying system and their prediction becomes the primary focus of the data assimilation procedure. The Kuroshio or Black Current that runs along the eastern coast of Japan is an example of just such a system. It undergoes infrequent but dramatic changes of state between a small meander during which the current remains close to the coast of Japan, and a large meander during which the current bulges away from the coast. Because of the important role that the Kuroshio plays in distributing heat and salinity in the surrounding region, prediction of these transitions is of acute interest. 
Here we focus on a regime in which both the stochastic forcing on the system and the observational noise are small. In this setting large deviation theory can be used to understand why standard filtering methods fail and to guide the design of more effective data assimilation techniques. Motivated by our large deviations analysis we propose several data assimilation strategies capable of efficiently handling rare events such as the transitions of the Kuroshio. These techniques are tested on a model of the Kuroshio and shown to perform much better than standard filtering methods. Here the sequence of observations (circles) is taken directly from one of our Kuroshio model's transition events from the small meander to the large meander. We tested two new algorithms (Algorithms 3 and 4 in the legend) motivated by our large deviations analysis as well as a standard particle filter and an ensemble Kalman filter. The parameters of each algorithm are chosen so that their costs are comparable. The particle filter and the ensemble Kalman filter fail to accurately track the transition. Algorithms 3 and 4 maintain accuracy (and smaller scale resolution) throughout the transition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.A33J3325A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.A33J3325A"><span>Assessment of 10-Year Global Record of Aerosol Products from the OMI Near-UV Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ahn, C.; Torres, O.; Jethva, H. T.</p> <p>2014-12-01</p> <p>Global observations of aerosol properties from space are critical for understanding climate change and air quality applications. 
The Ozone Monitoring Instrument (OMI) onboard the EOS-Aura satellite provides information on aerosol optical properties by making use of the large sensitivity to aerosol absorption and dark surface albedo in the UV spectral region. These unique features enable us to retrieve both aerosol extinction optical depth (AOD) and single scattering albedo (SSA) successfully from radiance measurements at 354 and 388 nm by the OMI near UV aerosol algorithm (OMAERUV). Recent improvements to algorithms in conjunction with the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and Atmospheric Infrared Sounder (AIRS) carbon monoxide data also reduce uncertainties due to aerosol layer heights and types significantly in retrieved products. We present validation results of OMI AOD against space and time collocated Aerosol Robotic Network (AERONET) measured AOD values over multiple stations representing major aerosol episodes and regimes. We also compare the OMI SSA against the inversion made by AERONET as well as an independent network of ground-based radiometers called SKYNET in Japan, China, South-East Asia, India, and Europe. The outcome of the evaluation analysis indicates that in spite of the "row anomaly" problem affecting the sensor since mid-2007, the long-term aerosol record shows remarkable sensor stability. 
The OMAERUV 10-year global aerosol record is publicly available at the NASA data service center web site (http://disc.sci.gsfc.nasa.gov/Aura/data-holdings/OMI/omaeruv_v003.shtml).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1424117-spatiotemporal-multiplexing-based-hexagonal-multicore-optical-fibres','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1424117-spatiotemporal-multiplexing-based-hexagonal-multicore-optical-fibres"><span>Spatiotemporal multiplexing based on hexagonal multicore optical fibres</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Chekhovskoy, I. S.; Sorokina, M. A.; Rubenchik, A. M.; ...</p> <p>2017-12-27</p> <p>Based on a genetic algorithm, in this paper we have solved the problem of finding the parameters of optical Gaussian pulses that make their efficient nonlinear combining possible in one of the peripheral cores of a 7-core hexagonal fibre. Two approaches based on individual selection of peak powers and field phases of the pulses launched into the fibre are considered. 
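A minimal genetic-algorithm loop of the general kind described (selection, crossover, mutation over a parameter vector) can be sketched as follows. The scalar fitness here is a hypothetical stand-in, not the authors' fibre-propagation model:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(params):
    # Hypothetical stand-in for the combining efficiency of a pulse
    # configuration: peaked at params = 0.5 in every dimension.
    return -np.sum((params - 0.5) ** 2)

def evolve(n_params=4, pop_size=40, generations=60, mutation=0.05):
    pop = rng.random((pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        # Selection: keep the better half, ranked by fitness.
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: average random pairs of parents.
        i = rng.integers(0, len(parents), size=(pop_size, 2))
        pop = (parents[i[:, 0]] + parents[i[:, 1]]) / 2.0
        # Mutation: small Gaussian perturbations keep diversity.
        pop += rng.normal(0.0, mutation, size=pop.shape)
    return max(pop, key=fitness)

best = evolve()
print(best)  # each component should end up near the optimum, 0.5
```

Replacing `fitness` with a simulation of nonlinear propagation in the multicore fibre would recover the structure of the search the abstract describes.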
Finally, the regimes of Gaussian pulse combining found here open up new possibilities for the development of devices for controlling optical radiation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.7127E..0FZ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.7127E..0FZ"><span>Identification of two-phase flow regime based on electrical capacitance tomography and soft-sensing technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhao, Ming-fu; Hu, Xin-Yu; Shao, Yun; Luo, Bin-bin; Wang, Xin</p> <p>2008-10-01</p> <p>This article analyses the football robots currently in common use in China, with the aim of improving the capability of the football robot hardware platform, and presents a football robot design based on a DSP core controller combined with a Fuzzy-PID control algorithm. The experiments showed that the DSP offers advantages such as fast operation, a variety of interfaces, and low power dissipation. 
The results show substantial improvements in the football robot's movement performance, control precision, and real-time responsiveness.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120011649','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120011649"><span>Investigation of Transonic Wake Dynamics for Mechanically Deployable Entry Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stern, Eric; Barnhardt, Michael; Venkatapathy, Ethiraj; Candler, Graham; Prabhu, Dinesh</p> <p>2012-01-01</p> <p>A numerical investigation of transonic flow around a mechanically deployable entry system being considered for a robotic mission to Venus has been performed, and preliminary results are reported. The flow around a conceptual representation of the vehicle geometry was simulated at discrete points along a ballistic trajectory using Detached Eddy Simulation (DES). The trajectory points selected span the low supersonic to transonic regimes with freestream Mach numbers from 1.5 to 0.8, and freestream Reynolds numbers (based on diameter) between 2.09 × 10⁶ and 2.93 × 10⁶. Additionally, the Mach 0.8 case was simulated at angles of attack between 0° and 5°. Static aerodynamic coefficients obtained from the data show qualitative agreement with data from 70° sphere-cone wind tunnel tests performed for the Viking program. 
Finally, the effect of choices of models and numerical algorithms is addressed by comparing the DES results to those using a Reynolds Averaged Navier-Stokes (RANS) model, as well as to results using a more dissipative numerical scheme.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..1714643P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..1714643P"><span>CDRD and PNPR passive microwave precipitation retrieval algorithms: verification study over Africa and Southern Atlantic</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Panegrossi, Giulia; Casella, Daniele; Cinzia Marra, Anna; Petracca, Marco; Sanò, Paolo; Dietrich, Stefano</p> <p>2015-04-01</p> <p>The ongoing NASA/JAXA Global Precipitation Measurement mission (GPM) requires the full exploitation of the complete constellation of passive microwave (PMW) radiometers orbiting around the globe for global precipitation monitoring. In this context the coherence of the estimates of precipitation using different passive microwave radiometers is a crucial need. We have developed two different passive microwave precipitation retrieval algorithms: one is the Cloud Dynamics Radiation Database algorithm (CDRD), a physically-based Bayesian algorithm for conically scanning radiometers (i.e., DMSP SSMIS); the other one is the Passive microwave Neural network Precipitation Retrieval (PNPR) algorithm for cross-track scanning radiometers (i.e., NOAA and MetOp-A/B AMSU-A/MHS, and NPP Suomi ATMS). 
The algorithms, originally created for application over Europe and the Mediterranean basin, and used operationally within the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF, http://hsaf.meteoam.it), have recently been modified and extended to Africa and the Southern Atlantic for application to the MSG full disk area. The two algorithms are based on the same physical foundation, i.e., the same cloud-radiation model simulations as a priori information in the Bayesian solver and as training dataset in the neural network approach, and they also use similar procedures for identification of frozen background surface, detection of snowfall, and determination of a pixel-based quality index of the surface precipitation retrievals. In addition, similar procedures for the screening of non-precipitating pixels are used. A novel algorithm for the detection of precipitation in tropical/sub-tropical areas has been developed. The precipitation detection algorithm shows a small rate of false alarms (also over arid/desert regions), a superior detection capability in comparison with other widely used screening algorithms, and it is applicable to all available PMW radiometers in the GPM constellation of satellites (including NPP Suomi ATMS, and GMI). Three years of SSMIS and AMSU/MHS data have been considered to carry out a verification study over Africa of the retrievals from the CDRD and PNPR algorithms. The precipitation products from the TRMM Precipitation Radar (PR) (TRMM products 2A25 and 2A23) have been used as ground truth. The results of this study, aimed at assessing the accuracy of the precipitation retrievals in different climatic regions and precipitation regimes, will be presented. Particular emphasis will be given to the analysis of the level of coherence of the precipitation estimates and patterns between the two algorithms exploiting different radiometers. 
Recent developments aimed at the full exploitation of the GPM constellation of satellites for optimal precipitation/drought monitoring will also be presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSMTE..03.3405S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSMTE..03.3405S"><span>Weighted community detection and data clustering using message passing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shi, Cheng; Liu, Yanchen; Zhang, Pan</p> <p>2018-03-01</p> <p>Grouping objects into clusters based on the similarities or weights between them is one of the most important problems in science and engineering. In this work, by extending message-passing algorithms and spectral algorithms proposed for an unweighted community detection problem, we develop a non-parametric method based on statistical physics, by mapping the problem to the Potts model at the critical temperature of spin-glass transition and applying belief propagation to solve the marginals corresponding to the Boltzmann distribution. Our algorithm is robust to over-fitting and gives a principled way to determine whether there are significant clusters in the data and how many clusters there are. We apply our method to different clustering tasks. In the community detection problem in weighted and directed networks, we show that our algorithm significantly outperforms existing algorithms. In the clustering problem, where the data were generated by mixture models in the sparse regime, we show that our method works all the way down to the theoretical limit of detectability and gives accuracy very close to that of the optimal Bayesian inference. In the semi-supervised clustering problem, our method only needs several labels to work perfectly in classic datasets. 
Finally, we further develop Thouless-Anderson-Palmer equations which greatly reduce the computational complexity in dense networks while giving almost the same performance as belief propagation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.H42A..01N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.H42A..01N"><span>Building the GPM-GV Column from the GPM Cold season Precipitation Experiment (Invited)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nesbitt, S. W.; Duffy, G. A.; Gleicher, K.; McFarquhar, G. M.; Kulie, M.; Williams, C. R.; Petersen, W. A.; Munchak, S. J.; Tokay, A.; Skofronick Jackson, G.; Chandrasekar, C. V.; Kollias, P.; Hudak, D. R.; Tanelli, S.</p> <p>2013-12-01</p> <p>Within the context of the Drop Size Distribution Working Group (DSDWG) of the Global Precipitation Mission-Ground Validation (GPM-GV) program, a major science and satellite precipitation algorithm validation focus is on quantitatively determining the variability of microphysical properties of precipitation in the vertical column, as well as the radiative properties of those particles at GPM-relevant microwave frequencies. The GPM Cold season Precipitation Experiment, or GCPEx, was conducted to address both of these objectives in mid-latitude winter precipitation. Radar observations at C, X, Ku, Ka, and W band from ground based scanning radars, profiling radars, and aircraft, as well as an aircraft passive microwave imager from GCPEx, conducted in early 2012 near Barrie, Ontario, Canada, can be used to constrain the observed reflectivities and brightness temperatures in snow as well as construct radar dual frequency ratios (DFRs) that can be used to identify regimes of microwave radiative properties in observed hydrometeor columns. 
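Identifying regimes in a dual-frequency-ratio (DFR) phase space, as described here, is at heart a clustering problem. A toy k-means sketch in a synthetic (DFR, reflectivity) plane illustrates the idea; the two "regimes" and their parameters below are invented for illustration and are not the authors' actual technique:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic (DFR, reflectivity) points from two hypothetical scattering
# regimes: small quasi-Rayleigh particles and large aggregates.
small = rng.normal([1.0, 15.0], [0.5, 2.0], (150, 2))
large = rng.normal([8.0, 30.0], [1.0, 3.0], (150, 2))
points = np.vstack([small, large])

def kmeans(x, k=2, iters=50):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Keep a centroid in place if its cluster momentarily empties.
        centroids = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(points)
print(np.round(centroids, 1))  # one centroid per scattering regime
```

With well-separated regimes the recovered centroids sit near the generating means; real DFR data would of course be noisier and may need more clusters.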
These data can be directly matched with aircraft and ground-based in situ microphysical probes, such as 2-D and bulk aircraft probes and surface disdrometers, to place the microphysical and microwave scattering and emission properties of the snow in context throughout the column of hydrometeors. In this presentation, particle scattering regimes will be identified in hydrometeor columns from GCPEx storm events using a clustering technique in a multi-frequency DFR-near Rayleigh radar reflectivity phase space applied to matched ground-based and aircraft-based radar and passive microwave data. These data will be interpreted using matched in situ disdrometer and aircraft probe microphysical data (particle size distributions, habit identification, fall speed, mass-diameter relationships) derived during the events analyzed. This database is geared towards evaluating scattering simulations and the choice of integral particle size distributions for snow precipitation retrieval algorithms for ground and spaceborne radars at relevant wavelengths. A comparison of results for different cases with varying synoptic forcing and microphysical evolution will be presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900012185','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900012185"><span>Use of microwave satellite data to study variations in rainfall over the Indian Ocean</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hinton, Barry B.; Martin, David W.; Auvine, Brian; Olson, William S.</p> <p>1990-01-01</p> <p>The University of Wisconsin Space Science and Engineering Center mapped rainfall over the Indian Ocean using a newly developed Scanning Multichannel Microwave Radiometer (SMMR) rain-retrieval algorithm. 
The short-range objective was to characterize the distribution and variability of Indian Ocean rainfall on seasonal and annual scales. The long-range objective is to clarify differences between land and marine regimes of monsoon rain. Researchers developed a semi-empirical algorithm for retrieving Indian Ocean rainfall. Tools for this development have come from radiative transfer and cloud liquid water models. Where possible, ground truth information from available radars was used in development and testing. SMMR rainfalls were also compared with Indian Ocean gauge rainfalls. Final Indian Ocean maps were produced for months, seasons, and years and interpreted in terms of historical analysis over the sub-continent.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930017360','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930017360"><span>Image-based ranging and guidance for rotorcraft</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Menon, P. K. A.</p> <p>1991-01-01</p> <p>This report documents the research carried out under NASA Cooperative Agreement No. NCC2-575 during the period Oct. 1988 - Dec. 1991. Primary emphasis of this effort was on the development of vision-based navigation methods for the rotorcraft nap-of-the-earth flight regime. A family of field-based ranging algorithms was developed during this research period. These ranging schemes are capable of handling both stereo and motion image sequences, and permit both translational and rotational camera motion. The algorithms require minimal computational effort and appear to be implementable in real time. A series of papers were presented on these ranging schemes, some of which are included in this report. 
A small part of the research effort was expended on synthesizing a rotorcraft guidance law that directly uses the vision-based ranging data. This work is discussed in the last section.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000032956','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000032956"><span>Surface Soil Moisture Retrieval Using SSM/I and Its Comparison with ESTAR: A Case Study Over a Grassland Region</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jackson, T.; Hsu, A. Y.; ONeill, P. E.</p> <p>1999-01-01</p> <p>This study extends a previous investigation on estimating surface soil moisture using the Special Sensor Microwave/Imager (SSM/I) over a grassland region. Although SSM/I is not optimal for soil moisture retrieval, it can under some conditions provide information. Rigorous analyses over land have been difficult due to the lack of good validation data sets. A scientific objective of the Southern Great Plains 1997 (SGP97) Hydrology Experiment was to investigate whether the retrieval algorithms for surface soil moisture developed at higher spatial resolution using truck- and aircraft-based passive microwave sensors can be extended to the coarser resolutions expected from satellite platforms. With the data collected for the SGP97, the objective of this study is to compare the surface soil moisture estimated from the SSM/I data with those retrieved from the L-band Electronically Scanned Thinned Array Radiometer (ESTAR) data, the core sensor for the experiment, using the same retrieval algorithm. The results indicated that an error of estimate of 7.81% could be achieved with SSM/I data as contrasted to 2.82% with ESTAR data over three intensive sampling areas of different vegetation regimes. 
This confirms the results of a previous study that SSM/I data can be used to retrieve surface soil moisture information at a regional scale under certain conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010lyot.confE..66S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010lyot.confE..66S"><span>Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Savransky, D.; Groff, T. D.; Kasdin, N. J.</p> <p>2010-10-01</p> <p>We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source. We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. 
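A toy version of such a matched-filter detection can be written in a few lines. The Gaussian PSF, planet position, contrast, and noise model below are all invented for illustration; the HCIL pipeline is of course more involved:

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(2)

size, sigma = 64, 1.5
yy, xx = np.mgrid[:size, :size]

def psf(cy, cx):
    """Gaussian stand-in for the instrument's point-spread function."""
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))

# Background noise plus a faint injected planet at (40, 22).
image = rng.normal(0.0, 0.2, (size, size)) + 1.0 * psf(40, 22)

# Matched filter: cross-correlate the image with the zero-mean template.
template = psf(size // 2, size // 2)
template -= template.mean()
score = np.real(ifft2(fft2(image) * np.conj(fft2(template))))
score = np.fft.fftshift(score)

peak = np.unravel_index(np.argmax(score), score.shape)
print(peak)  # should land at or near the injected position (40, 22)
```

Thresholding the correlation map rather than taking a single argmax is what turns this into a detector with a tunable false-positive rate, which is the trade-off the abstract evaluates.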
This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DFD.L2001D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DFD.L2001D"><span>Simulating compressible-incompressible two-phase flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Denner, Fabian; van Wachem, Berend</p> <p>2017-11-01</p> <p>Simulating compressible gas-liquid flows, e.g. air-water flows, presents considerable numerical issues and requires substantial computational resources, particularly because of the stiff equation of state for the liquid and the different Mach number regimes. Treating the liquid phase (low Mach number) as incompressible, yet concurrently considering the gas phase (high Mach number) as compressible, can improve the computational performance of such simulations significantly without sacrificing important physical mechanisms. A pressure-based algorithm for the simulation of two-phase flows is presented, in which a compressible and an incompressible fluid are separated by a sharp interface. The algorithm is based on a coupled finite-volume framework, discretised in conservative form, with a compressive VOF method to represent the interface. The bulk phases are coupled via a novel acoustically-conservative interface discretisation method that retains the acoustic properties of the compressible phase and does not require a Riemann solver. 
Representative test cases are presented to scrutinize the proposed algorithm, including the reflection of acoustic waves at the compressible-incompressible interface, shock-drop interaction and gas-liquid flows with surface tension. Financial support from the EPSRC (Grant EP/M021556/1) is gratefully acknowledged.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19890061394&hterms=treatment+gas&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Dtreatment%2Bgas','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19890061394&hterms=treatment+gas&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Dtreatment%2Bgas"><span>Hypersonic blunt body computations including real gas effects</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Montagne, J.-L.; Yee, H. C.; Klopfer, G. H.; Vinokur, M.</p> <p>1989-01-01</p> <p>Various second-order explicit and implicit TVD shock-capturing methods, a generalization of Roe's approximate Riemann solver, and a generalized flux-vector splitting scheme are used to study two-dimensional hypersonic real-gas flows. Special attention is given to the identification of some of the elements and parameters which can affect the convergence rate for high Mach numbers or real gases, but have negligible effect for low Mach numbers, for cases involving steady-state inviscid blunt flows. Blunt body calculations at Mach numbers of greater than 15 are performed to treat real-gas effects, and impinging shock results are obtained to test the treatment of slip surfaces and complex structures. 
Even with the addition of improvements, the convergence rate of algorithms in the hypersonic flow regime is found to be generally slower for a real gas than for a perfect gas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100018576','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100018576"><span>Global Instability on Laminar Separation Bubbles-Revisited</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Theofilis, Vassilis; Rodriquez, Daniel; Smith, Douglas</p> <p>2010-01-01</p> <p>In the last 3 years, global linear instability of LSB has been revisited, using state-of-the-art hardware and algorithms. Eigenspectra of LSB flows have been understood and classified in branches of known and newly-discovered eigenmodes. Major achievements: The world's largest numerical solutions of global eigenvalue problems are routinely performed. Key aerodynamic phenomena have been explained via critical point theory, applied to our global mode results. Theoretical foundation for control of LSB flows has been laid. Global mode of LSB at the origin of observable phenomena. U-separation on semi-infinite plate. Stall cells on (stalled) airfoil. Receptivity/Sensitivity/AFC feasible (practical?) via: Adjoint EVP solution. Direct/adjoint coupling (the Crete connection). Minor effect of compressibility on global instability in the subsonic compressible regime. 
Global instability analysis of LSB in realistic supersonic flows is apparently still quite some way over the horizon.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DPPNM9008D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DPPNM9008D"><span>Gamma beams generation with high intensity lasers for two photon Breit-Wheeler pair production</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>D'Humieres, Emmanuel; Ribeyre, Xavier; Jansen, Oliver; Esnault, Leo; Jequier, Sophie; Dubois, Jean-Luc; Hulin, Sebastien; Tikhonchuk, Vladimir; Arefiev, Alex; Toncian, Toma; 
Sentoku, Yasuhiko</p> <p>2017-10-01</p> <p>Linear Breit-Wheeler pair creation is the lowest threshold process in photon-photon interaction, controlling the energy release in Gamma Ray Bursts and Active Galactic Nuclei, but it has never been directly observed in the laboratory. Using numerical simulations, we demonstrate the possibility to produce collimated gamma beams with high energy conversion efficiency using high intensity lasers and innovative targets. When two of these beams collide at particular angles, our analytical calculations demonstrate a beaming effect easing the detection of the pairs in the laboratory. This effect has been confirmed in photon collision simulations using a recently developed innovative algorithm. An alternative scheme using Bremsstrahlung radiation produced by next generation high repetition rate laser systems is also being explored and the results of first optimization campaigns in this regime will be presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3460283','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3460283"><span>Retrospective analysis of 119 cases of pediatric acute promyelocytic leukemia: Comparisons of four treatment regimes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>LI, EN-QIN; XU, LING; ZHANG, ZHI-QUAN; XIAO, YAN; GUO, HAI-XIA; LUO, XUE-QUN; HU, QUN; LAI, DONG-BO; TU, LI-MING; JIN, RUN-MING</p> <p>2012-01-01</p> <p>Clinical trials have demonstrated that pediatric acute promyelocytic leukemia (APL) is highly curable. Small-scale studies have reported on the treatment of APL using one or two treatment regimes. 
Here, we report a multiple center-based study of 119 cases of pediatric APL treated with four regimes based on all-trans-retinoic acid (ATRA). We retrospectively analyzed the clinical characteristics, laboratorial test results and treatment outcome of the pediatric APL patients. Regime 1 used an in-house developed protocol, regime 2 was modified from the PETHEMA LPA99 protocol, regime 3 was modified from the European-APL93 protocol, and regime 4 used a protocol suggested by the British Committee for Standards in Haematology. The overall complete remission rates for the four regimes were 88.9, 87.5, 97.1 and 87.5%, respectively, which exhibited no statistical difference. However, more favorable results were observed for regimes 2 and 3 than regimes 1 and 4, in terms of the estimated 3.5-year disease-free survivals, relapse rates, drug toxicity (including hepatotoxicity, cardiac arrhythmia, and differentiation syndrome) and sepsis. In conclusion, the overall outcomes were more favorable after treatment with regimes 2 and 3 than with regimes 1 and 4, and this may have been due to the specific compositions of regimes 2 and 3. 
PMID:23060929</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016WRR....52.2439B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016WRR....52.2439B"><span>Benchmarking wide swath altimetry-based river discharge estimation algorithms for the Ganges river system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bonnema, Matthew G.; Sikder, Safat; Hossain, Faisal; Durand, Michael; Gleason, Colin J.; Bjerklie, David M.</p> <p>2016-04-01</p> <p>The objective of this study is to compare the effectiveness of three algorithms that estimate discharge from remotely sensed observables (river width, water surface height, and water surface slope) in anticipation of the forthcoming NASA/CNES Surface Water and Ocean Topography (SWOT) mission. SWOT promises to provide these measurements simultaneously, and the river discharge algorithms included here are designed to work with these data. Two algorithms were built around Manning's equation, the Metropolis Manning (MetroMan) method, and the Mean Flow and Geomorphology (MFG) method, and one approach uses hydraulic geometry to estimate discharge, the at-many-stations hydraulic geometry (AMHG) method. A well-calibrated and ground-truthed hydrodynamic model of the Ganges river system (HEC-RAS) was used as reference for three rivers from the Ganges River Delta: the main stem of Ganges, the Arial-Khan, and the Mohananda Rivers. The high seasonal variability of these rivers due to the Monsoon presented a unique opportunity to thoroughly assess the discharge algorithms in light of typical monsoon regime rivers. It was found that the MFG method provides the most accurate discharge estimations in most cases, with an average relative root-mean-squared error (RRMSE) across all three reaches of 35.5%. 
It is followed closely by the Metropolis Manning algorithm, with an average RRMSE of 51.5%. However, the MFG method's reliance on knowledge of prior river discharge limits its application on ungauged rivers. In terms of input data requirement at ungauged regions with no prior records, the Metropolis Manning algorithm provides a more practical alternative over a region that is lacking in historical observations as the algorithm requires less ancillary data. The AMHG algorithm, while requiring the least prior river data, provided the least accurate discharge measurements with an average wet and dry season RRMSE of 79.8% and 119.1%, respectively, across all rivers studied. This poor performance is directly traced to poor estimation of AMHG via a remotely sensed proxy, and results improve commensurate with MFG and MetroMan when prior AMHG information is given to the method. Therefore, we cannot recommend use of AMHG without inclusion of this prior information, at least for the studied rivers. The dry season discharge (within-bank flow) was captured well by all methods, while the wet season (floodplain flow) appeared more challenging. 
The picture that emerges from this study is that a multialgorithm approach may be appropriate during flood inundation periods in Ganges Delta.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20080023282','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20080023282"><span>Persistent Nature of Secondary Diurnal Modes of Precipitation over Oceanic and Continental Regimes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Yang, S.; Kuo, K.-S.; Smith, E.</p> <p>2007-01-01</p> <p>This investigation seeks a better understanding of the assorted mechanisms controlling the global distribution of precipitation diurnal variability based on the use of Tropical Rainfall Measuring Mission (TRMM) microwave radiometer and radar data. The horizontal distributions of precipitation's diurnal cycle are derived from eight years of TRMM Microwave Imager (TMI) and Precipitation Radar (PR) measurements involving three TRMM standard rain rate retrieval algorithms -- the resultant distributions analyzed at various spatiotemporal scales. The results reveal the prominent and expected late-evening to early-morning (LE-EM) precipitation maxima over oceans and the counterpart prominent and expected mid- to late-afternoon (MLA) maxima over continents. Moreover, and not generally recognized, the results reveal a widespread distribution of secondary maxima occurring over both oceans and continents -- maxima which generally mirror their counterpart regime's behavior. That is, many ocean regions exhibit clearcut secondary MLA precipitation maxima while many continental regions exhibit just as evident secondary LE-EM maxima. 
This investigation is the first comprehensive study of these globally prevalent secondary maxima and their widespread nature, a type of study only made possible when the analysis procedure is applied to a high-quality global-scale precipitation dataset. The characteristics of the secondary maxima are mapped and described on global grids using an innovative clock-face format, while a current study to be published at a later date provides physically-based explanations of the seasonal-regional distributions of the secondary maxima. In addition to an "explicit" maxima identification scheme, a "Fourier decomposition" maxima identification scheme is used to examine the amplitude and phase properties of the primary and secondary maxima -- as well as tertiary and quaternary maxima. Accordingly, the advantages, ambiguities, and pitfalls resulting from use of Fourier harmonic analysis are explained.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002JHyd..261..115C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002JHyd..261..115C"><span>Analysis of the linkage between rain and flood regime and its application to regional flood frequency estimation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cunderlik, Juraj M.; Burn, Donald H.</p> <p>2002-04-01</p> <p>Improving techniques of flood frequency estimation at ungauged sites is one of the foremost goals of contemporary hydrology. River flood regime is a resultant reflection of a composite catchment hydrologic response to flood producing processes. In this sense the process of identifying homogeneous pooling groups can be plausibly based on catchment similarity in flood regime. Unfortunately the application of any pooling approach that is based on flood regime is restricted to gauged sites. 
Because flood regime can be markedly determined by rainfall regime, catchment similarity in rainfall regime can be an alternative option for identifying flood frequency pooling groups. An advantage of such a pooling approach is that rainfall data are usually spatially and temporary more abundant than flood data and the approach can also be applied at ungauged sites. Therefore in this study we have quantified the linkage between rainfall and flood regime and explored the appropriateness of substituting rainfall regime for flood regime in regional pooling schemes. Two different approaches to describing rainfall regime similarity using tools of directional statistics have been tested and used for evaluation of the potential of rainfall regime for identification of hydrologically homogeneous pooling groups. The outputs were compared to an existing pooling framework adopted in the Flood Estimation Handbook. The results demonstrate that regional pooling based on rainfall regime information leads to a high number of initially homogeneous groups and seems to be a sound pooling alternative for catchments with a close linkage between rain and flood regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3323590','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3323590"><span>Multi-Dimensional, Mesoscopic Monte Carlo Simulations of Inhomogeneous Reaction-Drift-Diffusion Systems on Graphics-Processing Units</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Vigelius, Matthias; Meyer, Bernd</p> <p>2012-01-01</p> <p>For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. 
Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuine stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service, that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters is underway. 
PMID:22506001</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19990064389','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19990064389"><span>Novel, Miniature Multi-Hole Probes and High-Accuracy Calibration Algorithms for their use in Compressible Flowfields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rediniotis, Othon K.</p> <p>1999-01-01</p> <p>Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield to generate very accurate predictions of the velocity vector, flow direction, total pressure and static pressure. One of the algorithms PROBENET is based on the theory of neural networks, while the other is of a more conventional nature (polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed to complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters all the way down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow probes) were fabricated, calibrated and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter calibrated at NASA Langley for the entire subsonic regime. The developed calibration algorithms were extensively tested with these probes demonstrating excellent prediction capabilities. 
The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized in the following fashion. It consists of a "Technical Achievements" section that summarizes the major achievements, followed by an assembly of journal articles that were produced from this project and ends with two manuals for the two probe calibration algorithms developed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002cosp...34E1828D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002cosp...34E1828D"><span>Dynamic ocean provinces: a multi-sensor approach to global marine ecophysiology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dowell, M.; Campbell, J.; Moore, T.</p> <p></p> <p>The concept of oceanic provinces or domains has existed for well over a century. Such systems, whether real or only conceptual, provide a useful framework for understanding the mechanisms controlling biological, physical and chemical processes and their interactions. Criteria have been established for defining provinces based on physical forcings, availability of light and nutrients, complexity of the marine food web, and other factors. In general, such classification systems reflect the heterogeneous nature of the ocean environment, and the effort of scientists to comprehend the whole system by understanding its various homogeneous components. If provinces are defined strictly on the basis of geospatial or temporal criteria (e.g., latitude zones, bathymetry, or season), the resulting maps exhibit discontinuities that are uncharacteristic of the ocean. 
While this may be useful for many purposes, it is unsatisfactory in that it does not capture the dynamic nature of fluid boundaries in the ocean. Boundaries fixed in time and space do not allow us to observe interannual or longer-term variability (e.g., regime shifts) that may result from climate change. The current study illustrates the potential of using fuzzy logic as a means of classifying the ocean into objectively defined provinces using properties measurable from satellite sensors (MODIS and SeaWiFS). This approach accommodates the dynamic variability of provinces which can be updated as each image is processed. We adopt this classification as the basis for parameterizing specific algorithms for each of the classes. Once the class specific algorithms have been applied, retrievals are then recomposed into a single blended product based on the "weighted" fuzzy memberships. This will be demonstrated through animations of multi-year time- series of monthly composites of the individual classes or provinces. The provinces themselves are identified on the basis of global fields of chlorophyll, sea surface temperature and PAR which will also be subsequently used to parameterize primary production (PP) algorithms. Two applications of the proposed dynamic classification are presented. The first applies different peer-reviewed PP algorithms to the different classes and objectively evaluates their performance to select the algorithm which performs best, and then merges results into a single primary production product. A second application illustrates the variability of P I parameters in each province and- analyzes province specific variability in the quantum yield of photosynthesis. 
Finally results illustrating how this approach is implemented in estimating global oceanic primary production are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ApJ...763..102J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013ApJ...763..102J"><span>Nonlinear Evolution of Rayleigh-Taylor Instability in a Radiation-supported Atmosphere</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jiang, Yan-Fei; Davis, Shane W.; Stone, James M.</p> <p>2013-02-01</p> <p>The nonlinear regime of Rayleigh-Taylor instability (RTI) in a radiation supported atmosphere, consisting of two uniform fluids with different densities, is studied numerically. We perform simulations using our recently developed numerical algorithm for multi-dimensional radiation hydrodynamics based on a variable Eddington tensor (VET) as implemented in Athena, focusing on the regime where scattering opacity greatly exceeds absorption opacity. We find that the radiation field can reduce the growth and mixing rate of RTI, but this reduction is only significant when radiation pressure significantly exceeds gas pressure. Small-scale structures are also suppressed in this case. In the nonlinear regime, dense fingers sink faster than rarefied bubbles can rise, leading to asymmetric structures about the interface. By comparing the calculations that use a VET versus the Eddington approximation, we demonstrate that anisotropy in the radiation field can affect the nonlinear development of RTI significantly. We also examine the disruption of a shell of cold gas being accelerated by strong radiation pressure, motivated by models of radiation driven outflows in ultraluminous infrared galaxies. 
We find that when the growth timescale of RTI is smaller than acceleration timescale, the amount of gas that would be pushed away by the radiation field is reduced due to RTI.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000116202','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000116202"><span>OCTS And Seawifs Bio-Optical Algorithm and Product Vaildattion and Intercomparison in US Coastal Waters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Brow, Chirstopher; Subramaniam, Ajit; Culver, Mary; Brock, John C.</p> <p>2000-01-01</p> <p>Monitoring the health of U.S. coastal waters is an important goal of the National Oceanic and Atmospheric Administration (NOAA). Satellite sensors are capable of providing daily synoptic data of large expanses of the U.S. coast. Ocean color sensor, in particular, can be used to monitor the water quality of coastal waters on an operational basis. To appraise the validity of satellite-derived measurements, such as chlorophyll concentration, the bio-optical algorithms used to derive them must be evaluated in coastal environments. Towards this purpose, over 21 cruises in diverse U.S. coastal waters have been conducted. Of these 21 cruises, 12 have been performed in conjunction with and under the auspices of the NASA/SIMBIOS Project. The primary goal of these cruises has been to obtain in-situ measurements of downwelling irradiance, upwelling radiance, and chlorophyll concentrations in order to evaluate bio-optical algorithms that estimate chlorophyll concentration. In this Technical Memorandum, we evaluate the ability of five bio-optical algorithms, including the current SeaWiFS algorithm, to estimate chlorophyll concentration in surface waters of the South Atlantic Bight (SAB). 
The SAB consists of a variety of environments including coastal and continental shelf regimes, Gulf Stream waters, and the Sargasso Sea. The biological and optical characteristics of the region is complicated by temporal and spatial variability in phytoplankton composition, primary productivity, and the concentrations of colored dissolved organic matter (CDOM) and suspended sediment. As such, the SAB is an ideal location to test the robustness of algorithms for coastal use.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19780025504','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19780025504"><span>Minimum film thickness in elliptical contacts for different regimes of fluid-film lubrication</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hamrock, B. J.; Dowson, D.</p> <p>1978-01-01</p> <p>The film-parameter equations are provided for four fluid-film lubrication regimes found in elliptical contacts. These regimes are isoviscous-rigid; viscous-rigid; elastohydrodynamic of low-elastic-modulus materials, or isoviscous-elastic; and elastohydrodynamic, or viscous-elastic. The influence or lack of influence of elastic and viscous effects is the factor that distinguishes these regimes. The film-parameter equations for the respective regimes come from earlier theoretical studies by the authors on elastohydrodynamic and hydrodynamic lubrication of elliptical conjunctions. These equations are restated and the results are presented as a map of the lubrication regimes, with film-thickness contours on a log-log grid of the viscosity and elasticity parameters for five values of the ellipticity parameter. 
The results present a complete theoretical film-parameter solution for elliptical contacts in the four lubrication regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ990979.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ990979.pdf"><span>Antecedents of Teachers Fostering Effort within Two Different Management Regimes: An Assessment-Based Accountability Regime and Regime without External Pressure on Results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Christophersen, Knut-Andreas; Elstad, Eyvind; Turmo, Are</p> <p>2012-01-01</p> <p>This article focuses on the comparison of organizational antecedents of teachers' fostering of students' effort in two quite different accountability regimes: one management regime with an external-accountability system and one with no external accountability devices. 
The methodology involves cross-sectional surveys from two different management…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5813791','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5813791"><span>Clustering the Orion B giant molecular cloud based on its molecular emission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H.; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R.; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal</p> <p>2017-01-01</p> <p>Context Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). Aims We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. Methods We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional Probability Density Function (PDF) of the line integrated intensities. 
Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. Results A clustering analysis based only on the J = 1 – 0 lines of three isotopologues of CO proves suffcient to reveal distinct density/column density regimes (nH ~ 100 cm−3, ~ 500 cm−3, and > 1000 cm−3), closely related to the usual definitions of diffuse, translucent and high-column-density regions. Adding two UV-sensitive tracers, the J = 1 − 0 line of HCO+ and the N = 1 − 0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO+ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO+ intensity ratio in UV-illuminated regions. Finer distinctions in density classes (nH ~ 7 × 103 cm−3 ~ 4 × 104 cm−3) for the densest regions are also identified, likely related to the higher critical density of the CN and HCO+ (1 – 0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Conclusions Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. The association of simultaneous multi-line, wide-field mapping and powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers. 
PMID:29456256</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25785933','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25785933"><span>Tile-based Fisher ratio analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOFMS) data using a null distribution approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E</p> <p>2015-04-07</p> <p>Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high resolution pixel level data. 
To objectively limit the tile-based F-ratio results to only features which are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for the analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place the relevance of this work among other methods in this field, results are compared to those for pixel and peak table-based approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20170005809&hterms=sea&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dsea','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20170005809&hterms=sea&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dsea"><span>Operational Implementation of Sea Ice Concentration Estimates from the AMSR2 Sensor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Meier, Walter N.; Stewart, J. Scott; Liu, Yinghui; Key, Jeffrey; Miller, Jeffrey A.</p> <p>2017-01-01</p> <p>An operation implementation of a passive microwave sea ice concentration algorithm to support NOAA's operational mission is presented. The NASA team 2 algorithm, previously developed for the NASA advanced microwave scanning radiometer for the Earth observing system (AMSR-E) product suite, is adapted for operational use with the JAXA AMSR2 sensor through several enhancements. First, the algorithm is modified to process individual swaths and provide concentration from the most recent swaths instead of a 24-hour average. 
A latency (time since observation) field and a 24-hour concentration range (maximum-minimum) are included to provide indications of data timeliness and variability. Concentration from the Bootstrap algorithm is a secondary field to provide complementary sea ice information. A quality flag is implemented to provide information on interpolation, filtering, and other quality control steps. The AMSR2 concentration fields are compared with a different AMSR2 passive microwave product, and then validated via comparison with sea ice concentration from the Suomi visible and infrared imaging radiometer suite. This validation indicates the AMSR2 concentrations have a bias of 3.9% and an RMSE of 11.0% in the Arctic, and a bias of 4.45% and RMSE of 8.8% in the Antarctic. In most cases, the NOAA operational requirements for accuracy are met. However, in low-concentration regimes, such as during melt and near the ice edge, errors are higher because of the limitations of passive microwave sensors and the algorithm retrieval.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16715146','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16715146"><span>Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pezoa, Jorge E; Hayat, Majeed M; Torres, Sergio N; Rahman, Md Saifur</p> <p>2006-06-01</p> <p>We present an adaptive technique for the estimation of nonuniformity parameters of infrared focal-plane arrays that is robust with respect to changes and uncertainties in scene and sensor characteristics. The proposed algorithm is based on using a bank of Kalman filters in parallel. 
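One common way to combine a bank of Kalman filters run in parallel is to weight each filter by the likelihood of its innovation. The sketch below assumes that scheme, with scalar states and a random-walk model per filter; all parameters are hypothetical, and this is not the paper's multimodel implementation:

```python
import math

class ScalarKalman:
    """Scalar Kalman filter with a random-walk state model, process variance q,
    and measurement variance r."""
    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def step(self, z):
        self.p += self.q                        # predict
        s = self.p + self.r                     # innovation variance
        innov = z - self.x
        k = self.p / s
        self.x += k * innov                     # update
        self.p *= 1.0 - k
        # Gaussian likelihood of the innovation, used to adapt the weights
        return math.exp(-0.5 * innov ** 2 / s) / math.sqrt(2.0 * math.pi * s)

def bank_step(filters, weights, z):
    """Each filter updates independently; weights are rescaled by their
    likelihoods, and the estimate is the weighted superposition."""
    likes = [f.step(z) for f in filters]
    weights = [w * l for w, l in zip(weights, likes)]
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * f.x for w, f in zip(weights, filters)), weights

filters = [ScalarKalman(q, r=0.1) for q in (1e-4, 1e-2, 1.0)]
weights = [1.0 / 3] * 3
for _ in range(25):                             # constant measurement of 5.0
    estimate, weights = bank_step(filters, weights, 5.0)
```

Filters whose dynamic model matches the data accumulate weight; the combined estimate tracks the measurement regardless of which model is best.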
Each filter independently estimates state variables comprising the gain and the bias matrices of the sensor, according to its own dynamic-model parameters. The supervising component of the algorithm then generates the final estimates of the state variables by forming a weighted superposition of all the estimates rendered by each Kalman filter. The weights are computed and updated iteratively, according to the a posteriori-likelihood principle. The performance of the estimator and its ability to compensate for fixed-pattern noise is tested using both simulated and real data obtained from two cameras operating in the mid- and long-wave infrared regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3799987','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3799987"><span>A network of spiking neurons for computing sparse representations in an energy efficient way</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.</p> <p>2013-01-01</p> <p>Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. 
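The sparse-coding computation such a network performs can be caricatured with a (non-spiking) locally competitive iteration: leaky integration of a feedforward drive minus lateral inhibition, followed by a threshold. This is a generic LCA-style sketch under assumed toy dimensions, not the paper's HDA, whose outputs are quantized integrate-and-fire spikes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a dictionary of 20 unit-norm features in 8 dimensions, and a
# signal built from a single dictionary element (index 3, coefficient 2.0).
D = rng.normal(size=(8, 20))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3]

lam, dt = 0.2, 0.05
drive = D.T @ x                   # feedforward input to each unit
G = D.T @ D - np.eye(20)          # lateral inhibition between correlated units
u = np.zeros(20)                  # analog internal variables
a = np.zeros(20)                  # thresholded outputs ("external" variables)
for _ in range(2000):
    u += dt * (drive - u - G @ a)                       # leaky integration
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold

# At convergence the generating element carries essentially all the activity.
```

The fixed point solves the LASSO sparse-coding objective: here `a[3]` settles near 2.0 − λ = 1.8 while competing units are suppressed below threshold.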
We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise. PMID:22920853</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22920853','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22920853"><span>A network of spiking neurons for computing sparse representations in an energy-efficient way.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B</p> <p>2012-11-01</p> <p>Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t.
HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhA...50v3001B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhA...50v3001B"><span>Hand-waving and interpretive dance: an introductory course on tensor networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bridgeman, Jacob C.; Chubb, Christopher T.</p> <p>2017-06-01</p> <p>The curse of dimensionality associated with the Hilbert space of spin systems provides a significant obstruction to the study of condensed matter systems. Tensor networks have proven an important tool in attempting to overcome this difficulty in both the numerical and analytic regimes. These notes form the basis for a seven lecture course, introducing the basics of a range of common tensor networks and algorithms. In particular, we cover: introductory tensor network notation, applications to quantum information, basic properties of matrix product states, a classification of quantum phases using tensor networks, algorithms for finding matrix product states, basic properties of projected entangled pair states, and multiscale entanglement renormalisation ansatz states. The lectures are intended to be generally accessible, although the relevance of many of the examples may be lost on students without a background in many-body physics/quantum information. 
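As a taste of the machinery such a course covers, a matrix product state can be built from a full state vector by a left-to-right sweep of SVDs. A minimal sketch at toy sizes, using the GHZ state (exact at bond dimension 2):

```python
import numpy as np

def to_mps(psi, n, chi=None):
    """Decompose an n-qubit state vector into matrix product state tensors by
    sweeping left to right with SVDs, optionally truncating to bond dim chi."""
    tensors, rest = [], psi.reshape(1, -1)
    for _ in range(n - 1):
        u, s, vh = np.linalg.svd(rest.reshape(rest.shape[0] * 2, -1),
                                 full_matrices=False)
        if chi is not None:
            u, s, vh = u[:, :chi], s[:chi], vh[:chi]
        tensors.append(u.reshape(-1, 2, len(s)))   # (left, physical, right)
        rest = np.diag(s) @ vh
    tensors.append(rest.reshape(-1, 2, 1))
    return tensors

def contract(tensors):
    """Recover the full state vector by contracting the bond indices."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)

n = 6                                   # GHZ state: Schmidt rank 2 at every cut
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
mps = to_mps(ghz, n, chi=2)             # truncation to bond dimension 2 is exact
```

Contracting the MPS back reproduces the state exactly, illustrating why low-entanglement states escape the curse of dimensionality.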
For each lecture, several problems are given, with worked solutions in an ancillary file.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MNRAS.478..218P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MNRAS.478..218P"><span>Improving timing sensitivity in the microhertz frequency regime: limits from PSR J1713+0747 on gravitational waves produced by supermassive black hole binaries</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Perera, B. B. P.; Stappers, B. W.; Babak, S.; Keith, M. J.; Antoniadis, J.; Bassa, C. G.; Caballero, R. N.; Champion, D. J.; Cognard, I.; Desvignes, G.; Graikou, E.; Guillemot, L.; Janssen, G. H.; Karuppusamy, R.; Kramer, M.; Lazarus, P.; Lentati, L.; Liu, K.; Lyne, A. G.; McKee, J. W.; Osłowski, S.; Perrodin, D.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Theureau, G.; Verbiest, J. P. W.; Taylor, S. R.</p> <p>2018-07-01</p> <p>We search for continuous gravitational waves (CGWs) produced by individual supermassive black hole binaries in circular orbits using high-cadence timing observations of PSR J1713+0747. We observe this millisecond pulsar using the telescopes in the European Pulsar Timing Array with an average cadence of approximately 1.6 d over the period between 2011 April and 2015 July, including an approximately daily average between 2013 February and 2014 April. The high-cadence observations are used to improve the pulsar timing sensitivity across the gravitational wave frequency range of 0.008-5μHz. We use two algorithms in the analysis, including a spectral fitting method and a Bayesian approach. For an independent comparison, we also use a previously published Bayesian algorithm. 
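A spectral fitting method of the kind mentioned can be caricatured as least-squares fitting of a fixed-frequency sinusoid to timing residuals. The cadence, noise level, and amplitude below are illustrative only, not the EPTA analysis:

```python
import numpy as np

def fit_sinusoid(t, residuals, f):
    """Least-squares amplitude of a sinusoid of known frequency f in timing
    residuals: solve for a*cos + b*sin and return sqrt(a^2 + b^2)."""
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return float(np.hypot(coef[0], coef[1]))

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1500.0, 900))   # irregular ~1.6 d cadence, in days
f_uhz = 1.0 / 11.574                         # 1 uHz expressed in cycles per day
resid = 1e-7 * np.sin(2 * np.pi * f_uhz * t + 0.4) \
    + 2e-8 * rng.normal(size=t.size)         # injected signal plus white noise
amp = fit_sinusoid(t, resid, f_uhz)          # recovers the injected ~1e-7 amplitude
```

Scanning `f` over a grid and converting non-detections into amplitude bounds is the basic idea behind a strain upper limit at each frequency.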
We find that the Bayesian approaches provide optimal results and the timing observations of the pulsar place a 95 per cent upper limit on the sky-averaged strain amplitude of CGWs to be ≲3.5 × 10-13 at a reference frequency of 1 μHz. We also find a 95 per cent upper limit on the sky-averaged strain amplitude of low-frequency CGWs to be ≲1.4 × 10-14 at a reference frequency of 20 nHz.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MNRAS.tmp.1062P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MNRAS.tmp.1062P"><span>Improving timing sensitivity in the microhertz frequency regime: limits from PSR J1713+0747 on gravitational waves produced by super-massive black-hole
binaries</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Perera, B. B. P.; Stappers, B. W.; Babak, S.; Keith, M. J.; Antoniadis, J.; Bassa, C. G.; Caballero, R. N.; Champion, D. J.; Cognard, I.; Desvignes, G.; Graikou, E.; Guillemot, L.; Janssen, G. H.; Karuppusamy, R.; Kramer, M.; Lazarus, P.; Lentati, L.; Liu, K.; Lyne, A. G.; McKee, J. W.; Osłowski, S.; Perrodin, D.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Theureau, G.; Verbiest, J. P. W.; Taylor, S. R.</p> <p>2018-05-01</p> <p>We search for continuous gravitational waves (CGWs) produced by individual super-massive black-hole binaries (SMBHBs) in circular orbits using high-cadence timing observations of PSR J1713+0747. We observe this millisecond pulsar using the telescopes in the European Pulsar Timing Array (EPTA) with an average cadence of approximately 1.6 days over the period between April 2011 and July 2015, including an approximately daily average between February 2013 and April 2014. The high-cadence observations are used to improve the pulsar timing sensitivity across the GW frequency range of 0.008 - 5 μHz. We use two algorithms in the analysis, including a spectral fitting method and a Bayesian approach. For an independent comparison, we also use a previously published Bayesian algorithm. We find that the Bayesian approaches provide optimal results and the timing observations of the pulsar place a 95 per cent upper limit on the sky-averaged strain amplitude of CGWs to be ≲ 3.5 × 10-13 at a reference frequency of 1 μHz. 
We also find a 95 per cent upper limit on the sky-averaged strain amplitude of low-frequency CGWs to be ≲ 1.4 × 10-14 at a reference frequency of 20 nHz.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002APS..DFD.JJ009C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002APS..DFD.JJ009C"><span>A stochastic multi-scale method for turbulent premixed combustion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cha, Chong M.</p> <p>2002-11-01</p> <p>The stochastic chemistry algorithm of Bunker et al. and Gillespie is used to perform the chemical reactions in a transported probability density function (PDF) modeling approach of turbulent combustion. Recently, Kraft & Wagner have demonstrated a 100-fold gain in computational speed (for a 100 species mechanism) using the stochastic approach over the conventional, direct integration method of solving for the chemistry. Here, the stochastic chemistry algorithm is applied to develop a new transported PDF model of turbulent premixed combustion. The methodology relies on representing the relevant spatially dependent physical processes as queuing events. The canonical problem of a one-dimensional premixed flame is used for validation. For the laminar case, molecular diffusion is described by a random walk. For the turbulent case, one of two different material transport submodels can provide the necessary closure: Taylor dispersion or Kerstein's one-dimensional turbulence approach. The former exploits "eddy diffusivity" and hence would be much more computationally tractable for practical applications. Various validation studies are performed. Results from the Monte Carlo simulations compare well to asymptotic solutions of laminar premixed flames, both with and without high activation temperatures.
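The Gillespie algorithm referred to above draws exponentially distributed waiting times between reaction events, with rate set by the current propensity. A minimal sketch for a single first-order reaction (rate constants and molecule counts are arbitrary):

```python
import random

def gillespie_decay(n0, k, t_end, seed=0):
    """Gillespie SSA for the single reaction A -> B with rate constant k:
    waiting times between events are exponential with rate k * (number of A)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(k * n)     # time to the next reaction event
        if t > t_end:
            break
        n -= 1                          # one A molecule converts to B
    return n

# The mean over many runs approaches the deterministic n0 * exp(-k * t_end).
runs = [gillespie_decay(1000, 0.5, 2.0, seed=s) for s in range(200)]
mean_n = sum(runs) / len(runs)          # close to 1000 * exp(-1)
```

With several reaction channels, one also samples which reaction fires in proportion to its propensity; the exponential waiting time is the same.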
The correct scaling of the turbulent burning velocity is predicted in both Damköhler's small- and large-scale turbulence limits. The effect of applying the eddy diffusivity concept in the various regimes is discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006APS..DPPJP1115P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006APS..DPPJP1115P"><span>Fully implicit adaptive mesh refinement algorithm for reduced MHD</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Philip, Bobby; Pernice, Michael; Chacon, Luis</p> <p>2006-10-01</p> <p>In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as the fast adaptive composite grid (FAC) algorithm) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement.
Results of fully implicit, dynamically adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. References: L. Chacón et al., J. Comput. Phys. 178(1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT........16M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT........16M"><span>High-Power Helicon Double Gun Thruster</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Murakami, Nao</p> <p></p> <p>While chemical propulsion is necessary to launch a spacecraft from a planetary surface into space, electric propulsion has the potential to provide significant cost savings for the orbital transfer of payloads between planets. Due to extended wave-particle interactions, a plasma thruster that can operate in the 100 kW to several MW power regime can only be attained by increasing the size of the thruster, or by using an array of plasma thrusters. The High-Power Helicon (HPH) Double Gun thruster experiment examines whether firing two helicon thrusters in parallel produces an exhaust velocity higher than the exhaust velocity of a single thruster. The scaling law that relates the downstream plasma velocity with the number of helicon antennae is derived, and compared with the experimental result. In conjunction with data analysis, two digital filtering algorithms are developed to filter out the noise from helicon antennae.
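One simple digital filter for narrowband antenna pickup, sketched below under assumed numbers (and not necessarily either of the algorithms developed in the thesis), is a moving average over exactly one RF period, which nulls the fundamental and all of its harmonics:

```python
import numpy as np

def comb_average(signal, period_samples):
    """FIR moving average over exactly one RF period: its frequency response
    has nulls at the pickup fundamental and at every harmonic."""
    kernel = np.ones(period_samples) / period_samples
    return np.convolve(signal, kernel, mode="valid")

fs = 1000.0                                   # assumed sample rate, Hz
t = np.arange(0, 1, 1 / fs)
slow = np.linspace(0.0, 1.0, t.size)          # slowly varying signal of interest
pickup = 0.8 * np.sin(2 * np.pi * 50.0 * t)   # narrowband pickup at 50 Hz
clean = comb_average(slow + pickup, period_samples=20)  # 20 samples = one period
```

Because the 20-sample window spans the 50 Hz cycle exactly, the pickup averages to zero while the slow signal passes through nearly unchanged.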
The scaling law states that the downstream plasma velocity is proportional to the square root of the number of helicon antennae, which is in agreement with the experimental result.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25312930','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25312930"><span>Sparsity-based Poisson denoising with dictionary learning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Giryes, Raja; Elad, Michael</p> <p>2014-12-01</p> <p>The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling to the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process.
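The variance-stabilizing transformations alluded to above include the classical Anscombe transform, which maps Poisson counts to approximately unit-variance Gaussian data at high SNR (and degrades at low counts, motivating direct approaches):

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson(lam) counts become approximately Gaussian
    with variance 1 when lam is large, independent of lam."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(0)
# High-SNR regime: raw Poisson variances differ 4-fold, transformed ones don't.
variances = [anscombe(rng.poisson(lam, 200_000)).var() for lam in (100.0, 400.0)]
```

After the transform, any off-the-shelf Gaussian denoiser can be applied, followed by an (exact or algebraic) inverse transform.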
The reconstruction performance of the proposed scheme is competitive with leading methods in high SNR, and achieves state-of-the-art results in cases of low SNR.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MeScT..29g5204S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MeScT..29g5204S"><span>A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sun, Hui; Liu, Ji-Gou</p> <p>2018-07-01</p> <p>This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final equation is obtained. Thus, the estimation of C only involves a few simple calculations. It successfully retrieves sinusoidal and random displacements by means of simulated self-mixing signals in both weak and moderate feedback regimes. To deal with the errors resulting from noise and estimation bias of C, and to further improve the retrieval precision, a Kalman filter is employed following the general phase unwrapping method. The simulation and experiment results show that the retrieved displacement using the C obtained with the proposed method is comparable to that obtained by joint estimation of C and α.
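The general phase-unwrapping step mentioned above can be illustrated in isolation. The wavelength, target motion, and idealized phase model below are assumptions for the sketch; the C-estimation and Kalman stages are omitted:

```python
import numpy as np

lam = 785e-9                                    # assumed laser wavelength, m
t = np.linspace(0.0, 1.0, 2000)
displacement = 2e-6 * np.sin(2 * np.pi * 3 * t)        # target motion, m
phase = 4 * np.pi * displacement / lam                 # round-trip optical phase
wrapped = np.angle(np.exp(1j * phase))                 # sensor sees phase mod 2*pi
recovered = np.unwrap(wrapped) * lam / (4 * np.pi)     # lambda/(4*pi) m per radian
error = np.max(np.abs(recovered - displacement))       # ~0 for dense sampling
```

`np.unwrap` succeeds whenever consecutive phase samples differ by less than π; the Kalman stage in the paper then suppresses residual errors from noise and mislocated signal peaks.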
In addition, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21596383','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21596383"><span>Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas</p> <p>2011-06-24</p> <p>This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and the desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography.
</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005APS..DFD.FB002R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005APS..DFD.FB002R"><span>Blood Flow in the Stenotic Carotid Bifurcation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rayz, Vitaliy</p> <p>2005-11-01</p> <p>The carotid artery is prone to atherosclerotic disease and the growth of plaque in the vessel, often leading to severe occlusion or plaque rupture, resulting in emboli and thrombus, and, possibly, stroke. Modeling the flow in stenotic blood vessels can elucidate the influence of the flow on plaque growth and stability. Numerical simulations are carried out to model the complex flows in anatomically realistic, patient-specific geometries constructed from magnetic resonance images. The 3-D unsteady Navier-Stokes equations are solved in a finite-volume formulation, using an iterative pressure-correction algorithm. The flow field computed is highly three-dimensional, with high-speed jets and strong recirculating secondary flows. Sharp spatial and temporal variations of the velocities and shear stresses are observed. The results are in good agreement with the available experimental and clinical data. The influences of non-Newtonian blood behavior and arterial wall compliance are considered. Transitional and turbulent regimes have been examined using large-eddy simulation (LES).
This work supports the conjecture that numerical simulations can provide a diagnostic tool for assessing plaque stability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5929353','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5929353"><span>A convolutional neural network-based screening tool for X-ray serial crystallography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K.</p> <p>2018-01-01</p> <p>A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. Automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization. 
PMID:29714177</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1250801','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1250801"><span>Simulation of Cascaded Longitudinal-Space-Charge Amplifier at the Fermilab Accelerator Science & Technology (Fast) Facility</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Halavanau, A.; Piot, P.</p> <p>2015-12-01</p> <p>Cascaded Longitudinal Space Charge Amplifiers (LSCA) have been proposed as a mechanism to generate density modulation over a broad spectral range. The scheme has been recently demonstrated in the optical regime and has confirmed the production of broadband optical radiation. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at the Fermilab Accelerator Science & Technology (FAST) facility to produce broadband ultraviolet radiation. Our studies are carried out using elegant with an included tree-based, grid-less space-charge algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29714177','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29714177"><span>A convolutional neural network-based screening tool for X-ray serial crystallography.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ke, Tsung Wei; Brewster, Aaron S; Yu, Stella X; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K</p> <p>2018-05-01</p> <p>A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source.
Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. Automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1434392-convolutional-neural-network-based-screening-tool-ray-serial-crystallography','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1434392-convolutional-neural-network-based-screening-tool-ray-serial-crystallography"><span>A convolutional neural network-based screening tool for X-ray serial crystallography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.; ...</p> <p>2018-04-24</p> <p>A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. Automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts.
Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1434392-convolutional-neural-network-based-screening-tool-ray-serial-crystallography','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1434392-convolutional-neural-network-based-screening-tool-ray-serial-crystallography"><span>A convolutional neural network-based screening tool for X-ray serial crystallography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.</p> <p></p> <p>A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. Automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890011551','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890011551"><span>Additional development of the XTRAN3S computer program</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Borland, C. 
J.</p> <p>1989-01-01</p> <p>Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JPhA...46c5203B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JPhA...46c5203B"><span>Stochastic description of geometric phase for polarized waves in random media</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent</p> <p>2013-01-01</p> <p>We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. 
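The compound Poisson picture on SO(3) can be sketched as composing one random small rotation per scattering event, with the number of events through the medium drawn from a Poisson distribution. The rates and angle spreads below are arbitrary, and this is only a caricature of the forward model, not the paper's harmonic-analysis machinery:

```python
import numpy as np

rng = np.random.default_rng(42)

def axis_angle_rotation(axis, angle):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def compound_poisson_rotation(rate, depth, spread):
    """Number of scattering events through the slab is Poisson(rate * depth);
    each event applies a random rotation with angle scale `spread`."""
    R = np.eye(3)
    for _ in range(rng.poisson(rate * depth)):
        R = axis_angle_rotation(rng.normal(size=3), rng.normal(0.0, spread)) @ R
    return R

R = compound_poisson_rotation(rate=5.0, depth=2.0, spread=0.2)
```

Sampling many such compositions gives an empirical polarization-rotation distribution on the exit surface, whose shape encodes the scattering rate and per-event angle spread that the inverse problem estimates.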
We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DFDD31005E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DFDD31005E"><span>An adjoint-based framework for maximizing mixing in binary fluids</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Eggl, Maximilian; Schmid, Peter</p> <p>2017-11-01</p> <p>Mixing in the inertial, but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above mentioned regime (Reynolds number Re = 1000 , Peclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers which are numerically modeled using a penalization approach. 
In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing shape and rotational speed of the stirrers will be demonstrated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1434406-high-flux-femtosecond-ray-emission-from-electron-hose-instability-laser-wakefield-accelerators','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1434406-high-flux-femtosecond-ray-emission-from-electron-hose-instability-laser-wakefield-accelerators"><span>High flux femtosecond x-ray emission from the electron-hose instability in laser wakefield accelerators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Dong, C. F.; Zhao, T. Z.; Behm, K.</p> <p></p> <p>Here, bright and ultrashort duration x-ray pulses can be produced through betatron oscillations of electrons during laser wakefield acceleration (LWFA). Our experimental measurements using the Hercules laser system demonstrate a dramatic increase in x-ray flux for interaction distances beyond the depletion/dephasing lengths, where the initial electron bunch injected into the first wake bucket catches up with the laser pulse front and the laser pulse depletes. A transition from an LWFA regime to a beam-driven plasma wakefield acceleration regime consequently occurs. The drive electron bunch is susceptible to the electron-hose instability and rapidly develops large amplitude oscillations in its tail, which leads to greatly enhanced x-ray radiation emission. We measure the x-ray flux as a function of acceleration length using a variable length gas cell.
3D particle-in-cell simulations using a Monte Carlo synchrotron x-ray emission algorithm elucidate the time-dependent variations in the radiation emission processes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvS..21d1303D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvS..21d1303D"><span>High flux femtosecond x-ray emission from the electron-hose instability in laser wakefield accelerators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dong, C. F.; Zhao, T. Z.; Behm, K.; Cummings, P. G.; Nees, J.; Maksimchuk, A.; Yanovsky, V.; Krushelnick, K.; Thomas, A. G. R.</p> <p>2018-04-01</p> <p>Bright and ultrashort duration x-ray pulses can be produced through betatron oscillations of electrons during laser wakefield acceleration (LWFA). Our experimental measurements using the Hercules laser system demonstrate a dramatic increase in x-ray flux for interaction distances beyond the depletion/dephasing lengths, where the initial electron bunch injected into the first wake bucket catches up with the laser pulse front and the laser pulse depletes. A transition from an LWFA regime to a beam-driven plasma wakefield acceleration regime consequently occurs. The drive electron bunch is susceptible to the electron-hose instability and rapidly develops large amplitude oscillations in its tail, which leads to greatly enhanced x-ray radiation emission. We measure the x-ray flux as a function of acceleration length using a variable length gas cell.
3D particle-in-cell simulations using a Monte Carlo synchrotron x-ray emission algorithm elucidate the time-dependent variations in the radiation emission processes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22622284-stochastic-asymptotic-preserving-scheme-kinetic-fluid-model-disperse-two-phase-flows-uncertainty','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22622284-stochastic-asymptotic-preserving-scheme-kinetic-fluid-model-disperse-two-phase-flows-uncertainty"><span>A stochastic asymptotic-preserving scheme for a kinetic-fluid model for disperse two-phase flows with uncertainty</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, School of Mathematical Science, MOELSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Shu, Ruiwen, E-mail: rshu2@math.wisc.edu</p> <p></p> <p>In this paper we consider a kinetic-fluid model for disperse two-phase flows with uncertainty. We propose a stochastic asymptotic-preserving (s-AP) scheme in the generalized polynomial chaos stochastic Galerkin (gPC-sG) framework, which allows the efficient computation of the problem in both kinetic and hydrodynamic regimes. The s-AP property is proved by deriving the equilibrium of the gPC version of the Fokker–Planck operator. The coefficient matrices that arise in a Helmholtz equation and a Poisson equation, essential ingredients of the algorithms, are proved to be positive definite under reasonable and mild assumptions. The computation of the gPC version of a translation operator that arises in the inversion of the Fokker–Planck operator is accelerated by a spectrally accurate splitting method.
Numerical examples illustrate the s-AP property and the efficiency of the gPC-sG method in various asymptotic regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018NJPh...20d3040K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018NJPh...20d3040K"><span>Interval stability for complex systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.</p> <p>2018-04-01</p> <p>Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations, which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability.
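As a rough illustration of how interval basin stability might be estimated in practice (a minimal sketch, not the authors' implementation), the snippet below samples random perturbations of a toy bistable system dx/dt = x - x^3, integrates each trajectory for a fixed interval, and reports the fraction of trials that return to the attractor at x = +1. All parameter values (kick size, interval length, tolerance) are illustrative assumptions.

```python
import random

def f(x):
    # toy bistable system dx/dt = x - x^3, stable equilibria at x = +1 and x = -1
    return x - x ** 3

def integrate(x, t_end, dt=0.01):
    # simple forward-Euler integration over [0, t_end]
    for _ in range(int(t_end / dt)):
        x += dt * f(x)
    return x

def interval_basin_stability(x_star=1.0, max_kick=3.0, t_interval=5.0,
                             trials=2000, tol=0.1, seed=1):
    """Fraction of random perturbations (uniform in [-max_kick, max_kick])
    from which the system returns to within tol of x_star in t_interval."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        x0 = x_star + rng.uniform(-max_kick, max_kick)
        if abs(integrate(x0, t_interval) - x_star) < tol:
            returned += 1
    return returned / trials

print(interval_basin_stability())
```

Sweeping t_interval from small to large values is what bridges the short-time and asymptotic notions of basin stability that the abstract refers to.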
We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1434406-high-flux-femtosecond-ray-emission-from-electron-hose-instability-laser-wakefield-accelerators','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1434406-high-flux-femtosecond-ray-emission-from-electron-hose-instability-laser-wakefield-accelerators"><span>High flux femtosecond x-ray emission from the electron-hose instability in laser wakefield accelerators</span></a></p> <p><a target="_blank"
rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Dong, C. F.; Zhao, T. Z.; Behm, K.; ...</p> <p>2018-04-24</p> <p>Here, bright and ultrashort duration x-ray pulses can be produced through betatron oscillations of electrons during laser wakefield acceleration (LWFA). Our experimental measurements using the Hercules laser system demonstrate a dramatic increase in x-ray flux for interaction distances beyond the depletion/dephasing lengths, where the initial electron bunch injected into the first wake bucket catches up with the laser pulse front and the laser pulse depletes. A transition from an LWFA regime to a beam-driven plasma wakefield acceleration regime consequently occurs. The drive electron bunch is susceptible to the electron-hose instability and rapidly develops large amplitude oscillations in its tail, which leads to greatly enhanced x-ray radiation emission. We measure the x-ray flux as a function of acceleration length using a variable length gas cell. 3D particle-in-cell simulations using a Monte Carlo synchrotron x-ray emission algorithm elucidate the time-dependent variations in the radiation emission processes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyA..493..148D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyA..493..148D"><span>Leverage effect, economic policy uncertainty and realized volatility with regime switching</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duan, Yinying; Chen, Wang; Zeng, Qing; Liu, Zhicao</p> <p>2018-03-01</p> <p>In this study, we first investigate the impacts of the leverage effect and economic policy uncertainty (EPU) on future volatility in the framework of regime switching.
Out-of-sample results show that the HAR-RV including the leverage effect and economic policy uncertainty with regimes can achieve higher forecast accuracy than RV-type and GARCH-class models. Our robustness results further imply that these factors in the framework of regime switching can substantially improve the HAR-RV's forecast performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.A51A0010R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.A51A0010R"><span>An Improved Wind Speed Retrieval Algorithm For The CYGNSS Mission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ruf, C. S.; Clarizia, M. P.</p> <p>2015-12-01</p> <p>The NASA spaceborne Cyclone Global Navigation Satellite System (CYGNSS) mission is a constellation of 8 microsatellites focused on tropical cyclone (TC) inner core process studies. CYGNSS will be launched in October 2016, and will use GPS-Reflectometry (GPS-R) to measure ocean surface wind speed in all precipitating conditions, and with sufficient frequency to resolve genesis and rapid intensification. Here we present a modified and improved version of the current baseline Level 2 (L2) wind speed retrieval algorithm designed for CYGNSS. An overview of the current approach is first presented, which makes use of two different observables computed from 1-second Level 1b (L1b) delay-Doppler Maps (DDMs) of radar cross section. The first observable, the Delay-Doppler Map Average (DDMA), is the averaged radar cross section over a delay-Doppler window around the DDM peak (i.e. the specular reflection point coordinate in delay and Doppler). The second, the Leading Edge Slope (LES), is the leading edge of the Integrated Delay Waveform (IDW), obtained by integrating the DDM along the Doppler dimension. 
The observables are calculated over a limited range of time delays and Doppler frequencies to comply with the baseline spatial resolution requirement for the retrieved winds, which in the case of CYGNSS is 25 km. In the current approach, the relationship between the observable value and the surface winds is described by an empirical Geophysical Model Function (GMF) that is characterized by a very high slope in the high wind regime, for both DDMA and LES observables, causing large errors in the retrieval at high winds. A simple mathematical modification of these observables is proposed, which linearizes the relationship between ocean surface roughness and the observables. This significantly reduces the non-linearity present in the GMF that relates the observables to the wind speed, and reduces the root-mean-square error between true and retrieved winds, particularly in the high wind regime. The modified retrieval algorithm is tested using GPS-R synthetic data simulated using an End-to-End Simulator (E2ES) developed for CYGNSS, and it is then applied to GPS-R data from the TechDemoSat-1 (TDS-1) GPS-R experiment. An analysis of the algorithm's performance for both synthetic and real data is presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..MAR.M1157O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..MAR.M1157O"><span>Nematic phase in the CE-regime of colossal magnetoresistive manganites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ochoa, Emily; Sen, Cengiz; Dagotto, Elbio; Lamar/UTK Collaboration</p> <p></p> <p>We report nematic phase tendencies around the first-order CE transition in the two-orbital double exchange model with Jahn-Teller phonons at electronic density n = 0.5.
Starting with a random state at high temperatures, we employ a careful cool-down method using a Monte Carlo algorithm. We then monitor the spin structure factor S (q) of the CE phase as a function of temperature. Near the critical temperature, S (q) grows with decreasing temperature for both right- and left-ordered CE ladders, followed by a spontaneous symmetry breaking into one or the other as the critical temperature is achieved. Below the critical temperature a pure CE state with a staggered charge order is obtained. Our results are similar to those observed in pnictides in earlier studies. Lamar University Office of Undergraduate Research, and U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1369312-coherent-soft-ray-diffraction-imaging-coliphage-pr772-linac-coherent-light-source','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1369312-coherent-soft-ray-diffraction-imaging-coliphage-pr772-linac-coherent-light-source"><span>Coherent soft X-ray diffraction imaging of coliphage PR772 at the Linac coherent light source</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Reddy, Hemanth K. N.; Yoon, Chun Hong; Aquila, Andrew; ...</p> <p>2017-06-27</p> <p>Single-particle diffraction from X-ray Free Electron Lasers offers the potential for molecular structure determination without the need for crystallization. In an effort to further develop the technique, we present a dataset of coherent soft X-ray diffraction images of Coliphage PR772 virus, collected at the Atomic Molecular Optics (AMO) beamline with pnCCD detectors in the LAMP instrument at the Linac Coherent Light Source. The diameter of PR772 ranges from 65–70 nm, which is considerably smaller than the previously reported ~600 nm diameter Mimivirus. 
This reflects continued progress in XFEL-based single-particle imaging towards the single molecular imaging regime. As a result, the data set contains significantly more single-particle hits than were collected in previous experiments, enabling the development of improved statistical analysis, reconstruction algorithms, and quantitative metrics to determine resolution and self-consistency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JCoPh.353..169B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JCoPh.353..169B"><span>A projection hybrid high order finite volume/finite element method for incompressible turbulent flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.</p> <p>2018-01-01</p> <p>In this paper the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered thanks to the k-ε model. For the transport-diffusion stage, new high-order schemes are developed. The CVC Kolgan-type scheme and ADER methodology are extended to 3D. The latter is modified in order to profit from the dual mesh employed by the projection algorithm, and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method are carried out for the advection-diffusion-reaction equation. Within the projection stage the pressure correction is computed by a piecewise linear finite element method.
Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and at assessing the performance of the method on several realistic test problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1234856-identification-techniques-highly-boosted-bosons-decay-hadrons','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1234856-identification-techniques-highly-boosted-bosons-decay-hadrons"><span>Identification techniques for highly boosted W bosons that decay into hadrons</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Khachatryan, Vardan</p> <p>2014-12-02</p> <p>In searches for new physics in the energy regime of the LHC, it is becoming increasingly important to distinguish single-jet objects that originate from the merging of the decay products of W bosons produced with high transverse momenta from jets initiated by single partons. Algorithms are defined to identify such W jets for different signals of interest, using techniques that are also applicable to other decays of bosons to hadrons that result in a single jet, such as those from highly boosted Z and Higgs bosons. The efficiency for tagging W jets is measured in data collected with the CMS detector at a center-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.7 fb⁻¹.
The performance of W tagging in data is compared with predictions from several Monte Carlo simulators.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20782603-lisa-data-analysis-using-genetic-algorithms','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20782603-lisa-data-analysis-using-genetic-algorithms"><span>LISA data analysis using genetic algorithms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas</p> <p></p> <p>This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources.
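The global-search idea described here can be illustrated with a deliberately simplified GA. The sketch below is not the authors' LISA pipeline: it fits the frequencies and amplitudes of two overlapping sinusoidal "sources" to a synthetic data stream using elitist selection and Gaussian mutation. All signal parameters and GA settings are illustrative assumptions.

```python
import math
import random

rng = random.Random(42)

# toy stand-in for the multi-source problem: two overlapping monochromatic
# "sources" (frequency, amplitude) buried in one data stream
TRUE = [(0.11, 1.0), (0.13, 0.7)]
T = range(150)
DATA = [sum(a * math.sin(2 * math.pi * f * t) for f, a in TRUE) for t in T]

def fitness(genome):
    # negative sum-of-squares residual between the model and the data stream
    model = [sum(a * math.sin(2 * math.pi * f * t) for f, a in genome) for t in T]
    return -sum((m - d) ** 2 for m, d in zip(model, DATA))

def random_genome():
    return [(rng.uniform(0.05, 0.2), rng.uniform(0.1, 1.5)) for _ in range(2)]

def mutate(genome, scale=0.005):
    # Gaussian mutation: small kicks in frequency, larger kicks in amplitude
    return [(f + rng.gauss(0, scale), a + rng.gauss(0, 10 * scale)) for f, a in genome]

def evolve(pop_size=40, generations=200):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # elitist selection: keep the fittest quarter
        pop = elite + [mutate(rng.choice(elite)) for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
print(sorted(best))
```

Elitism guarantees the best fitness never decreases between generations, which is one simple way to make such a search stable; real multi-source searches use far richer operators and parameter spaces.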
Several optimizations of a basic GA are presented with results derived from applications of the GA searches to simulated LISA data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960045779','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960045779"><span>Coherent Lidar Design and Performance Verification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Frehlich, Rod</p> <p>1996-01-01</p> <p>This final report summarizes the investigative results from the three complete years of funding; the corresponding publications are listed. The first year saw the verification of beam alignment for coherent Doppler lidar in space by using the surface return. The second year saw the analysis and computerized simulation of using heterodyne efficiency as an absolute measure of performance of coherent Doppler lidar. A new method was proposed to determine the estimation error for Doppler lidar wind measurements without the need for an independent wind measurement. Coherent Doppler lidar signal covariance, including wind shear and turbulence, was derived and calculated for typical atmospheric conditions. The effects of wind turbulence defined by Kolmogorov spatial statistics were investigated theoretically and with simulations. The third year saw the performance of coherent Doppler lidar in the weak signal regime determined by computer simulations using the best velocity estimators.
Improved algorithms for extracting the performance of velocity estimators with wind turbulence included were also produced.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010011957','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010011957"><span>Unsteady Analysis of Separated Aerodynamic Flows Using an Unstructured Multigrid Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pelaez, Juan; Mavriplis, Dimitri J.; Kandil, Osama</p> <p>2001-01-01</p> <p>An implicit method for the computation of unsteady flows on unstructured grids is presented. The resulting nonlinear system of equations is solved at each time step using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Validation of the code using a one-equation turbulence model is performed for the well-known case of flow over a cylinder. A Detached Eddy Simulation model is also implemented and its performance compared to the one equation Spalart-Allmaras Reynolds Averaged Navier-Stokes (RANS) turbulence model. Validation cases using DES and RANS include flow over a sphere and flow over a NACA 0012 wing including massive stall regimes. 
The project was driven by the ultimate goal of computing separated flows of aerodynamic interest, such as massive stall or flows over complex non-streamlined geometries.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhDT.......383C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhDT.......383C"><span>Data-driven modeling of hydroclimatic trends and soil moisture: Multi-scale data integration and decision support</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Coopersmith, Evan Joseph</p> <p></p> <p>The techniques and information employed for decision-making vary with the spatial and temporal scope of the assessment required. In modern agriculture, the farm owner or manager makes decisions on a day-to-day or even hour-to-hour basis for dozens of fields scattered over as much as a fifty-mile radius from some central location. Following precipitation events, land begins to dry. Land-owners and managers often trace serpentine paths of 150+ miles every morning to inspect the conditions of their various parcels. His or her objective lies in appropriate resource usage -- is a given tract of land dry enough to be workable at this moment or would he or she be better served waiting patiently? Longer-term, these owners and managers decide upon which seeds will grow most effectively and which crops will make their operations profitable. At even longer temporal scales, decisions are made regarding which fields must be acquired and sold and what types of equipment will be necessary in future operations. This work develops and validates algorithms for these shorter-term decisions, along with models of national climate patterns and climate changes to enable longer-term operational planning. 
A test site at the University of Illinois South Farms (Urbana, IL, USA) served as the primary location to validate machine learning algorithms, employing public sources of precipitation and potential evapotranspiration to model the wetting/drying process. In expanding such local decision support tools to locations on a national scale, one must recognize the heterogeneity of hydroclimatic and soil characteristics throughout the United States. Machine learning algorithms modeling the wetting/drying process must address this variability, and yet it is wholly impractical to construct a separate algorithm for every conceivable location. For this reason, a national hydrological classification system is presented, allowing clusters of hydroclimatic similarity to emerge naturally from annual regime curve data and facilitate the development of cluster-specific algorithms. Given the desire to enable intelligent decision-making at any location, this classification system is developed in a manner that will allow for classification anywhere in the U.S., even in an ungauged basin. Daily time series data from 428 catchments in the MOPEX database are analyzed to produce an empirical classification tree, partitioning the United States into regions of hydroclimatic similarity. In constructing a classification tree based upon 55 years of data, it is important to recognize the non-stationary nature of climate data. The shifts in climatic regimes will cause certain locations to shift their ultimate position within the classification tree, requiring decision-makers to alter land usage, farming practices, and equipment needs, and algorithms to adjust accordingly. This work adapts the classification model to address the issue of regime shifts over larger temporal scales and suggests how land-usage and farming protocol may vary from hydroclimatic shifts in decades to come. 
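The idea of letting clusters of hydroclimatic similarity emerge from annual regime curves can be sketched with a generic clustering algorithm. The snippet below is a hypothetical illustration, not the empirical classification tree or the MOPEX data used in the work: it applies plain k-means (Lloyd's algorithm) to synthetic monthly regime curves drawn from two assumed hydroclimates, a snowmelt-peaked regime and a winter-rain-peaked regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic monthly runoff "regime curves" for catchments drawn from two
# hypothetical hydroclimates: snowmelt-peaked (June) vs winter-rain-peaked
months = np.arange(12)
snowmelt = np.exp(-0.5 * ((months - 5) / 1.5) ** 2)
winter = (np.exp(-0.5 * (months / 2.0) ** 2) +
          np.exp(-0.5 * ((months - 11) / 2.0) ** 2))
curves = np.vstack([snowmelt + 0.05 * rng.standard_normal(12) for _ in range(20)] +
                   [winter + 0.05 * rng.standard_normal(12) for _ in range(20)])

def kmeans(X, k=2, iters=50, seed=1):
    # plain Lloyd's algorithm: assign points to the nearest centroid,
    # then recompute each centroid as the mean of its members
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(curves)
print(labels)
```

In the actual work the partitioning is an empirical classification tree driven by catchment attributes, which has the advantage of classifying ungauged basins; the clustering above only illustrates how regime curves alone can separate hydroclimates.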
Finally, the generalizability of the hydroclimatic classification system is tested with a physically-based soil moisture model calibrated at several locations throughout the continental United States. The soil moisture model is calibrated at a given site and then applied with the same parameters at other sites within and outside the same hydroclimatic class. The model's performance deteriorates minimally if the calibration and validation locations are within the same hydroclimatic class, but deteriorates significantly if the calibration and validation sites are located in different hydroclimatic classes. These soil moisture estimates at the field scale are then further refined by the introduction of LiDAR elevation data, distinguishing faster-drying peaks and ridges from slower-drying valleys. The inclusion of LiDAR enabled multiple locations within the same field to be predicted accurately despite non-identical topography. This cross-application of parametric calibrations and LiDAR-driven disaggregation facilitates decision-support at locations without proximally-located soil moisture sensors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4605733','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4605733"><span>Climatic and Landscape Influences on Fire Regimes from 1984 to 2010 in the Western United States</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Zhihua; Wimberly, Michael C.</p> <p>2015-01-01</p> <p>An improved understanding of the relative influences of climatic and landscape controls on multiple fire regime components is needed to enhance our understanding of modern fire regimes and how they will respond to future environmental change.
To address this need, we analyzed the spatio-temporal patterns of fire occurrence, size, and severity of large fires (> 405 ha) in the western United States from 1984–2010. We assessed the associations of these fire regime components with environmental variables, including short-term climate anomalies, vegetation type, topography, and human influences, using boosted regression tree analysis. Results showed that large fire occurrence, size, and severity each exhibited distinctive spatial and spatio-temporal patterns, which were controlled by different sets of climate and landscape factors. Antecedent climate anomalies had the strongest influences on fire occurrence, resulting in the highest spatial synchrony. In contrast, climatic variability had weaker influences on fire size and severity and vegetation types were the most important environmental determinants of these fire regime components. Topography had moderately strong effects on both fire occurrence and severity, and human influence variables were most strongly associated with fire size. These results suggest a potential for the emergence of novel fire regimes due to the responses of fire regime components to multiple drivers at different spatial and temporal scales. Next-generation approaches for projecting future fire regimes should incorporate indirect climate effects on vegetation type changes as well as other landscape effects on multiple components of fire regimes. 
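A hedged sketch of the kind of boosted regression tree analysis described above, using scikit-learn's gradient boosting on synthetic data in place of the authors' fire records; the predictor names and effect sizes are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the fire data: occurrence of a large fire is
# driven mostly by two climate anomalies, weakly by landscape variables.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(size=n),   # antecedent precipitation anomaly
    rng.normal(size=n),   # antecedent temperature anomaly
    rng.uniform(size=n),  # topographic roughness
    rng.uniform(size=n),  # human influence index
])
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.2 * X[:, 2]
y = (logit + rng.normal(scale=0.5, size=n)) > 0

# Fit boosted trees, then rank predictor influence by permutation.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                   random_state=1).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=1)
ranking = np.argsort(imp.importances_mean)[::-1]  # climate should lead
```

Here the climate anomalies dominate the importance ranking by construction, mirroring the qualitative finding for fire occurrence in the study.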
PMID:26465959</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20050215490&hterms=advanced+performance+management&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dadvanced%2Bperformance%2Bmanagement','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20050215490&hterms=advanced+performance+management&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dadvanced%2Bperformance%2Bmanagement"><span>Advanced Health Management Algorithms for Crew Exploration Applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Davidson, Matt; Stephens, John; Jones, Judit</p> <p>2005-01-01</p> <p>Achieving the goals of the President's Vision for Exploration will require new and innovative ways to achieve reliability increases of key systems and sub-systems. The most prominent approach used in current systems is to maintain hardware redundancy. This imposes constraints on the system and consumes weight that could otherwise be used for payload on extended lunar, Martian, or other deep space missions. A technique to improve reliability while reducing system weight and constraints is the use of an Advanced Health Management System (AHMS). This system contains diagnostic algorithms and decision logic to mitigate or minimize the impact of system anomalies on propulsion system performance throughout the powered flight regime. The purposes of the AHMS are to increase the probability of successfully placing the vehicle into the intended orbit (Earth, Lunar, or Martian escape trajectory), to increase the probability of safely executing an abort after the vehicle has developed anomalous performance during the launch or ascent phases of the mission, and to minimize or mitigate anomalies during the cruise portion of the mission.
This is accomplished by improving knowledge of the state of propulsion system operation at any given time, using turbomachinery vibration protection logic and an overall system analysis algorithm that utilizes an underlying physical model and a wide array of engine system operational parameters to detect and mitigate predefined engine anomalies. These algorithms are generic enough to be utilized on any propulsion system yet can be easily tailored to each application by changing input data and engine-specific parameters. The key to the advancement of such a system is the verification of the algorithms. These algorithms will be validated through the use of a database of nominal and anomalous performance from a large propulsion system where data exist for catastrophic and noncatastrophic propulsion system failures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20020016071&hterms=BIO&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DBIO','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20020016071&hterms=BIO&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DBIO"><span>OCTS and SeaWiFS Bio-Optical Algorithm and Product Validation and Intercomparison in US Coastal Waters. Chapter 5</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Brown, Christopher W.; Subramaniam, Ajit; Culver, Mary; Brock, John C.</p> <p>2001-01-01</p> <p>Monitoring the health of US coastal waters is an important goal of the National Oceanic and Atmospheric Administration (NOAA). Satellite sensors are capable of providing daily synoptic data of large expanses of the US coast. Ocean color sensors, in particular, can be used to monitor the water quality of coastal waters on an operational basis.
To appraise the validity of satellite-derived measurements, such as chlorophyll concentration, the bio-optical algorithms used to derive them must be evaluated in coastal environments. Toward this purpose, over 21 cruises in diverse US coastal waters have been conducted. Of these cruises, 12 have been performed in conjunction with and under the auspices of the NASA Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project. The primary goal of these cruises has been to obtain in-situ measurements of downwelling irradiance, upwelling radiance, and chlorophyll concentrations in order to evaluate bio-optical algorithms that estimate chlorophyll concentration. In this Technical Memorandum, we evaluate the ability of five bio-optical algorithms, including the current Sea-Viewing Wide Field-of-view Sensor (SeaWiFS) algorithm, to estimate chlorophyll concentration in surface waters of the South Atlantic Bight (SAB). The SAB consists of a variety of environments, including coastal and continental shelf regimes, Gulf Stream waters, and the Sargasso Sea. The biological and optical characteristics of the region are complicated by temporal and spatial variability in phytoplankton composition, primary productivity, and the concentrations of colored dissolved organic matter (CDOM) and suspended sediment.
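Empirical chlorophyll algorithms of the kind evaluated here are typically polynomials in the log of a blue-to-green band ratio. The sketch below shows that generic form only; the coefficients and reflectance values are placeholders for illustration, not the operational SeaWiFS (or any published) values.

```python
import numpy as np

# Generic maximum-band-ratio form: chlorophyll is a polynomial in the log
# of the largest blue-to-green remote-sensing reflectance ratio. The
# coefficients below are placeholders, not a published parameterization.
def band_ratio_chl(rrs443, rrs490, rrs510, rrs555,
                   coeffs=(0.3, -3.0, 2.0, 0.6, -1.5)):
    ratio = np.log10(np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555)
    log_chl = sum(c * ratio ** i for i, c in enumerate(coeffs))
    return 10.0 ** log_chl  # chlorophyll-a, mg m^-3

# Plausible moderate-water reflectances (sr^-1), invented for this sketch.
chl = band_ratio_chl(rrs443=0.004, rrs490=0.005, rrs510=0.004, rrs555=0.003)
```

In optically complex coastal water, CDOM and sediment perturb the blue-green ratio, which is exactly why such algorithms need the regional evaluation described in the abstract.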
As such, the SAB is an ideal location to test the robustness of algorithms for coastal use.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29879530','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29879530"><span>Optimization of radioactive sources to achieve the highest precision in three-phase flow meters using Jaya algorithm.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M</p> <p>2018-05-17</p> <p>The gamma ray source plays a very important role in the precision of multi-phase flow metering. In this study, different combinations of gamma ray sources ((133Ba-137Cs), (133Ba-60Co), (241Am-137Cs), (241Am-60Co), (133Ba-241Am), and (60Co-137Cs)) were investigated in order to optimize the three-phase flow meter. The three phases were water, oil, and gas, and the flow regime was considered annular. The required data were generated numerically using the Monte Carlo code MCNP-X. The present study forecasts the volume fractions in the annular three-phase flow, based on a multi-energy metering system comprising various radiation sources and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the volume fractions sum to one, the modeling problem is constrained, meaning that the hybrid model needs to forecast only two volume fractions. Six hybrid models, one for each combination of radiation sources, are designed. The models are employed to forecast the gas and water volume fractions. The next step is to train the hybrid models on the numerically obtained data.
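The Jaya optimizer named above is a parameter-free population method in which each candidate moves toward the current best solution and away from the worst. A minimal sketch follows, with a simple sphere function standing in for the neural network's forecast error; the bounds, population size, and iteration count are illustrative choices.

```python
import numpy as np

# Minimal Jaya sketch: trial = x + r1*(best - |x|) - r2*(worst - |x|),
# with greedy acceptance of improvements. No algorithm-specific tuning
# parameters beyond population size and iteration budget.
def jaya(objective, bounds, pop_size=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([objective(x) for x in pop])
    for _ in range(iters):
        best = pop[np.argmin(cost)]
        worst = pop[np.argmax(cost)]
        r1 = rng.random((pop_size, dim))
        r2 = rng.random((pop_size, dim))
        trial = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
        trial = np.clip(trial, lo, hi)
        trial_cost = np.array([objective(x) for x in trial])
        improved = trial_cost < cost
        pop[improved] = trial[improved]
        cost[improved] = trial_cost[improved]
    return pop[np.argmin(cost)], cost.min()

# Sphere function as a stand-in objective; the real use case would be the
# network's volume-fraction forecast error.
x_best, f_best = jaya(lambda x: np.sum(x ** 2),
                      (np.full(3, -5.0), np.full(3, 5.0)))
```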
The results show that the best forecasts of the gas and water volume fractions are obtained when the (241Am-137Cs) pair is used as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20070019357','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20070019357"><span>Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances</p> <p>2006-01-01</p> <p>In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget.
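The incremental, constant-in-time bias correction idea in the skin temperature scheme above can be illustrated for a single gridpoint: each analysis cycle nudges the bias estimate by a fraction of the bias-corrected observation-minus-forecast increment, so a persistent offset is gradually absorbed into the correction term. This is a hedged sketch, not the operational algorithm; the gain, noise levels, and 2 K offset are invented.

```python
import numpy as np

# One gridpoint, constant-bias variant: the correction b is updated each
# cycle by a fraction (gain) of the innovation computed against the
# bias-corrected forecast. Gain and noise values are illustrative.
def update_bias(bias, forecast, observation, gain=0.1):
    innovation = observation - (forecast + bias)
    return bias + gain * innovation

rng = np.random.default_rng(2)
true_offset = 2.0  # suppose the model's skin temperature runs 2 K cold
bias = 0.0
for _ in range(300):
    truth = 290.0 + rng.normal(scale=0.5)
    forecast = truth - true_offset + rng.normal(scale=0.5)
    obs = truth + rng.normal(scale=0.5)
    bias = update_bias(bias, forecast, obs)
# After many cycles, bias approaches the 2 K offset it must compensate.
```

The refined diurnal version described in the abstract would carry one such estimate per hour of day rather than a single constant.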
In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction, an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10445E..1PP','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10445E..1PP"><span>Theoretical basis, principles of design, and experimental study of the prototype of perfect AFCS transmitting signals without coding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Platonov, A.; Zaitsev, Ie.; Opalski, L. J.</p> <p>2017-08-01</p> <p>The paper presents an overview of the design methodology and results of experiments with a Prototype of highly efficient optimal adaptive feedback communication systems (AFCS), transmitting low-frequency analog signals without coding. The paper emphasizes the role of forward transmitter saturation as the factor that blocked implementation of the theoretical results of pioneering (1960s-1970s) and later research on FCS. A deepened analysis of the role of the statistical fitting condition in the adequate formulation and solution of the AFCS optimization task is given. The solution of the task, the optimal transmission/reception algorithms, is presented in a form useful for elaboration of the hardware/software Prototype.
A notable particularity of the Prototype is the absence of encoding/decoding units, whose functions are realized by the adaptive pulse amplitude modulator (PAM) of the forward transmitter (FT) and by the estimating/controlling algorithm in the receiver of the base station (BS). Experiments confirm that the Prototype transmits signals from FT to BS "perfectly": with a bit rate equal to the capacity of the system, and with limiting energy [J/bit] and spectral [bps/Hz] efficiency. Another experimentally confirmed and no less important particularity of AFCS is its capability to adjust the parameters of FT and BS to the characteristics of the application scenario and to maintain the ideal transmission regime, including spectral-energy efficiency. AFCS adjustment can be made using BS estimates of the mean square error (MSE). The concluding part of the paper discusses the presented results, stressing the capability of AFCS to solve problems arising in the development of dense wireless networks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1882b0020G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1882b0020G"><span>Mathematical model of a rotational bioreactor for the dynamic cultivation of scaffold-adhered human mesenchymal stem cells for bone regeneration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ganimedov, V. L.; Papaeva, E. O.; Maslov, N. A.; Larionov, P. M.</p> <p>2017-09-01</p> <p>Development of cell-mediated scaffold technologies for the treatment of critical bone defects is very important for reparative bone regeneration. Today the properties of bioreactors for cell-seeded scaffold cultivation are the subject of intensive research.
We used mathematical modeling of the rotational reactor and constructed a computational algorithm with the ANSYS software package to develop this new procedure. The solution obtained with the constructed computational algorithm is in good agreement with Couette's analytical solution for flow between two coaxial cylinders. A series of flow computations for different rotation frequencies (1, 0.75, 0.5, 0.33, 1.125 Hz) was performed in the laminar flow regime approximation. It was found that Taylor vortices appear in the annular gap between the cylinders in the simulated bioreactor. Shear stresses in the range of interest (0.002-0.1 Pa) arise on the outer surface of the inner cylinder when it rotates at frequencies not exceeding 0.8 Hz. Thus the constructed mathematical model and computational algorithm allow the shear stress and pressure values to be predicted as functions of the rotation frequency and geometric parameters, and the operating mode of the bioreactor to be optimized.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvL.119v0503C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvL.119v0503C"><span>Optimal Quantum Spatial Search on Random Temporal Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser</p> <p>2017-12-01</p> <p>To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network.
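The Couette benchmark used to validate the bioreactor solver above has a closed-form wall shear stress when the inner cylinder rotates inside a fixed outer one, which makes that verification easy to reproduce. The geometry and viscosity below are hypothetical values, not the paper's.

```python
import math

# Circular Couette flow: v(r) = A*r + B/r between radii r1 < r2, inner
# cylinder rotating at angular speed omega, outer cylinder at rest. The
# wall shear stress magnitude on the inner cylinder reduces to
# 2*mu*omega*r2^2 / (r2^2 - r1^2).
def couette_wall_shear(mu, omega, r_inner, r_outer):
    return 2.0 * mu * omega * r_outer ** 2 / (r_outer ** 2 - r_inner ** 2)

mu = 1.0e-3                  # Pa*s, water-like culture medium (assumed)
omega = 2.0 * math.pi * 0.5  # rad/s for a 0.5 Hz rotation frequency
tau = couette_wall_shear(mu, omega, r_inner=0.02, r_outer=0.03)
# tau is about 0.011 Pa, inside the 0.002-0.1 Pa range of interest.
```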
We consider a network of n nodes constituted by a time-ordered sequence of Erdős-Rényi random graphs G(n, p), where p is the probability that any two given nodes are connected: after every time interval τ, a new graph G(n, p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs is sufficiently connected to perform optimal search on it. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018NucFu..58e6019W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018NucFu..58e6019W"><span>Achievement of radiative feedback control for long-pulse operation on EAST</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wu, K.; Yuan, Q. P.; Xiao, B. J.; Wang, L.; Duan, Y. M.; Chen, J. B.; Zheng, X. W.; Liu, X. J.; Zhang, B.; Xu, J. C.; Luo, Z. P.; Zang, Q.; Li, Y. Y.; Feng, W.; Wu, J. H.; Yang, Z. S.; Zhang, L.; Luo, G.-N.; Gong, X. Z.; Hu, L. Q.; Hu, J.
S.; Li, J.</p> <p>2018-05-01</p> <p>The active feedback control of radiated power to prevent overheating of the divertor target plates during long-pulse operation has been developed and implemented on EAST. The radiation control algorithm, with impurity seeding via a supersonic molecular beam injection (SMBI) system, has shown great success in both reliability and stability. By seeding a sequence of short neon (Ne) impurity pulses with the SMBI from the outer mid-plane, the radiated power of the bulk plasma can be well controlled, and the duration of radiative control (feedforward and feedback) is 4.5 s during a discharge of 10 s. Reliable control of the total radiated power of the bulk plasma has been successfully achieved in long-pulse upper single null (USN) discharges with a tungsten divertor. The achieved control range of f_rad is 20%–30% in L-mode regimes and 18%–36% in H-mode regimes. The temperature of the divertor target plates was maintained at a low level during the radiative control phase. The peak particle flux on the divertor target was decreased by feedforward Ne injection in the L-mode discharges, while the Ne pulses from the SMBI had no influence on the peak particle flux because of the very small injected volume. It is shown that although the radiated power increased, no serious reduction of plasma-stored energy or confinement was observed during the control phase.
The success of the radiation control algorithm and current experiments in radiated power control represents a significant advance for steady-state divertor radiation and heat flux control on EAST for near-future long-pulse operation.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li class="active"><span>17</span></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28293137','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28293137"><span>Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E</p> 
<p>2017-02-01</p> <p>Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. 
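A hedged sketch of a nonlinear autoregressive multi-fidelity scheme in the spirit described above (not the authors' implementation): one Gaussian process learns the cheap low-fidelity model, and a second learns the map from (x, f_low(x)) to the high-fidelity response, so nonlinear cross-correlations can be captured. The toy functions, kernels, and sample sizes are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_low(x):   # plentiful, cheap low-fidelity model (toy)
    return np.sin(8.0 * x)

def f_high(x):  # scarce high-fidelity model, nonlinearly tied to f_low
    return (x - 0.5) * f_low(x) ** 2

x_lo = np.linspace(0.0, 1.0, 50)[:, None]
x_hi = np.linspace(0.0, 1.0, 8)[:, None]

# Level 1: GP surrogate for the low-fidelity model from abundant data.
gp_lo = GaussianProcessRegressor(RBF(0.1), alpha=1e-8).fit(x_lo, f_low(x_lo))

# Level 2: GP over the augmented input (x, f_low(x)), so the scarce
# high-fidelity data only has to resolve the cross-correlation.
z_hi = np.hstack([x_hi, gp_lo.predict(x_hi).reshape(-1, 1)])
gp_hi = GaussianProcessRegressor(RBF([0.1, 1.0]), alpha=1e-8).fit(
    z_hi, f_high(x_hi))

x_test = np.linspace(0.0, 1.0, 25)[:, None]
z_test = np.hstack([x_test, gp_lo.predict(x_test).reshape(-1, 1)])
pred = gp_hi.predict(z_test)
```

The linear autoregressive schemes this generalizes would instead assume f_high ≈ ρ·f_low + δ(x) with a scalar ρ, which cannot represent the quadratic coupling in this toy example.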
The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhDT.......201R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhDT.......201R"><span>Modeling, Control, and Estimation of Flexible, Aerodynamic Structures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ray, Cody W.</p> <p></p> <p>Engineers have long been inspired by nature’s flyers. Such animals navigate complex environments gracefully and efficiently by using a variety of evolutionary adaptations for high-performance flight. Biologists have discovered a variety of sensory adaptations that provide flow state feedback and allow flying animals to feel their way through flight. A specialized skeletal wing structure and plethora of robust, adaptable sensory systems together allow nature’s flyers to adapt to myriad flight conditions and regimes. In this work, motivated by biology and the successes of bio-inspired, engineered aerial vehicles, linear quadratic control of a flexible, morphing wing design is investigated, helping to pave the way for truly autonomous, mission-adaptive craft. The proposed control algorithm is demonstrated to morph a wing into desired positions. Furthermore, motivated specifically by the sensory adaptations organisms possess, this work transitions to an investigation of aircraft wing load identification using structural response as measured by distributed sensors. A novel, recursive estimation algorithm is utilized to recursively solve the inverse problem of load identification, providing both wing structural and aerodynamic states for use in a feedback control, mission-adaptive framework. 
The recursive load identification algorithm is demonstrated to provide accurate load estimates in both simulation and experiment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011IzVF...54j..31R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011IzVF...54j..31R"><span>Algorithmic and software for definition the chaotic parameter MEGNO in the problems of asteroids dynamics. (Russian Title: Алгоритмическое и программное обеспечение для определения параметра MEGNO в задачах динамики астероидов)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Razdymakhina, O. N.</p> <p>2011-07-01</p> <p>This paper describes an algorithm and program for determining the averaged MEGNO parameter for asteroids. The program was developed in a parallel programming environment on the "SKIF Cyberia" cluster. The parameter is determined by jointly integrating the asteroid's equations of motion, the variational equations, and two equations for the MEGNO parameters. This algorithm was chosen because the averaged MEGNO parameter allows the boundary of the transition from a regular to a chaotic regime of asteroid motion to be identified.
The program was tested on several objects with different characters of motion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1430258','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1430258"><span>Understanding the tropical cloud feedback from an analysis of the circulation and stability regimes simulated from an upgraded multiscale modeling framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Xu, Kuan-Man; Cheng, Anning</p> <p></p> <p>As revealed from studies using conventional general circulation models (GCMs), the thermodynamic contribution to the tropical cloud feedback dominates the dynamic contribution, but these models have difficulty in simulating the subsidence regimes in the tropics. In this study, we analyze the tropical cloud feedback from a 2 K sea surface temperature (SST) perturbation experiment performed with a multiscale modeling framework (MMF). The MMF explicitly represents cloud processes using 2-D cloud-resolving models with an advanced higher-order turbulence closure in each atmospheric column of the host GCM. We sort the monthly mean cloud properties and cloud radiative effects according to circulation and stability regimes. Here, we find that the regime-sorted dynamic changes dominate the thermodynamic changes in terms of the absolute magnitude. The dynamic changes in the weak subsidence regimes exhibit strong negative cloud feedback due to increases in shallow cumulus and deep clouds while those in strongly convective and moderate-to-strong subsidence regimes have opposite signs, resulting in a small contribution to cloud feedback.
On the other hand, the thermodynamic changes are large due to decreases in stratocumulus clouds in the moderate-to-strong subsidence regimes with small opposite changes in the weak subsidence and strongly convective regimes, resulting in a relatively large contribution to positive cloud feedback. The dynamic and thermodynamic changes contribute equally to positive cloud feedback and are relatively insensitive to stability in the moderate-to-strong subsidence regimes, but they are sensitive to stability changes from the SST increase in convective and weak subsidence regimes. Lastly, these results have implications for interpreting cloud feedback mechanisms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1430258-understanding-tropical-cloud-feedback-from-analysis-circulation-stability-regimes-simulated-from-upgraded-multiscale-modeling-framework','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1430258-understanding-tropical-cloud-feedback-from-analysis-circulation-stability-regimes-simulated-from-upgraded-multiscale-modeling-framework"><span>Understanding the tropical cloud feedback from an analysis of the circulation and stability regimes simulated from an upgraded multiscale modeling framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Xu, Kuan-Man; Cheng, Anning</p> <p>2016-11-15</p> <p>As revealed from studies using conventional general circulation models (GCMs), the thermodynamic contribution to the tropical cloud feedback dominates the dynamic contribution, but these models have difficulty in simulating the subsidence regimes in the tropics. In this study, we analyze the tropical cloud feedback from a 2 K sea surface temperature (SST) perturbation experiment performed with a multiscale modeling framework (MMF).
The MMF explicitly represents cloud processes using 2-D cloud-resolving models with an advanced higher-order turbulence closure in each atmospheric column of the host GCM. We sort the monthly mean cloud properties and cloud radiative effects according to circulation and stability regimes. Here, we find that the regime-sorted dynamic changes dominate the thermodynamic changes in terms of the absolute magnitude. The dynamic changes in the weak subsidence regimes exhibit strong negative cloud feedback due to increases in shallow cumulus and deep clouds while those in strongly convective and moderate-to-strong subsidence regimes have opposite signs, resulting in a small contribution to cloud feedback. On the other hand, the thermodynamic changes are large due to decreases in stratocumulus clouds in the moderate-to-strong subsidence regimes with small opposite changes in the weak subsidence and strongly convective regimes, resulting in a relatively large contribution to positive cloud feedback. The dynamic and thermodynamic changes contribute equally to positive cloud feedback and are relatively insensitive to stability in the moderate-to-strong subsidence regimes. But they are sensitive to stability changes from the SST increase in convective and weak subsidence regimes.
Lastly, these results have implications for interpreting cloud feedback mechanisms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JASMS.tmp...35B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JASMS.tmp...35B"><span>Ion-neutral Clustering of Bile Acids in Electrospray Ionization Across UPLC Flow Regimes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Brophy, Patrick; Broeckling, Corey D.; Murphy, James; Prenni, Jessica E.</p> <p>2018-02-01</p> <p>Bile acid authentic standards were used as model compounds to quantitatively evaluate complex in-source phenomena on a UPLC-ESI-TOF-MS operated in the negative mode. Three different diameter columns and a ceramic-based microfluidic separation device were utilized, allowing for detailed descriptions of bile acid behavior across a wide range of flow regimes and instantaneous concentrations. A custom processing algorithm based on correlation analysis was developed to group together all ion signals arising from a single compound; these grouped signals produce verified compound spectra for each bile acid at each on-column mass loading. Significant adduction was observed for all bile acids investigated under all flow regimes and across a wide range of bile acid concentrations. The distribution of bile acid containing clusters was found to depend on the specific bile acid species, solvent flow rate, and bile acid concentration. Relative abundances of each cluster changed non-linearly with concentration. It was found that summing all MS level (low collisional energy) ions and ion-neutral adducts arising from a single compound improves linearity across the concentration range (0.125-5 ng on column) and increases the sensitivity of MS level quantification.
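Correlation-based grouping of ion signals, as in the bile acid processing described above, can be sketched as a greedy pass over a Pearson correlation matrix of extracted-ion chromatograms. The traces, threshold, and grouping rule here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Ion chromatograms whose intensity profiles co-vary across an elution
# peak are grouped as one compound's parent/adduct/cluster set.
def group_by_correlation(traces, threshold=0.9):
    """Greedy grouping of chromatographic traces by Pearson correlation."""
    corr = np.corrcoef(traces)
    unassigned = set(range(len(traces)))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = {j for j in unassigned if corr[seed, j] >= threshold}
        groups.append(sorted(group))
        unassigned -= group
    return groups

t = np.linspace(0.0, 10.0, 200)
peak_a = np.exp(-((t - 4.0) ** 2))  # compound A elution profile
peak_b = np.exp(-((t - 7.0) ** 2))  # compound B elution profile
traces = np.vstack([
    1.0 * peak_a,  # deprotonated ion of A
    0.4 * peak_a,  # an adduct of A, co-varying with the parent signal
    0.8 * peak_b,  # deprotonated ion of B
])
groups = group_by_correlation(traces)
# → ions 0 and 1 group together as compound A; ion 2 forms its own group
```

Summing the grouped traces before calibration is then the "sum all ions and adducts" step the abstract reports as improving linearity.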
The behavior of each cluster roughly follows simple equilibrium processes consistent with our understanding of electrospray ionization mechanisms and ion transport processes occurring in atmospheric pressure interfaces. [Figure not available: see fulltext.]</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23659239','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23659239"><span>Oral health and welfare state regimes: a cross-national analysis of European countries.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guarnizo-Herreño, Carol C; Tsakos, Georgios; Sheiham, Aubrey; Watt, Richard G</p> <p>2013-06-01</p> <p>Very little is known about the potential relationship between welfare state regimes and oral health. This study assessed the oral health of adults in a range of European countries clustered by welfare regimes according to Ferrera's typology and the complementary Eastern type. We analysed data from Eurobarometer wave 72.3, a cross-sectional survey of 31 European countries carried out in 2009. We evaluated three self-reported oral health outcomes: edentulousness, no functional dentition (<20 natural teeth), and oral impacts on daily living. Age-standardized prevalence rates were estimated for each country and for each welfare state regime. The Scandinavian regime showed lower prevalence rates for all outcomes. For edentulousness and no functional dentition, there were higher prevalence rates in the Eastern regime but no significant differences between Anglo-Saxon, Bismarckian, and Southern regimes. The Southern regime presented a higher prevalence of oral impacts on daily living. Results by country indicated that Sweden had the lowest prevalences for edentulousness and no functional dentition, and Denmark had the lowest prevalence for oral impacts. 
The results suggest that Scandinavian welfare states, with more redistributive and universal welfare policies, had better population oral health. Future research should provide further insights about the potential mechanisms through which welfare-state regimes would influence oral health. © 2013 Eur J Oral Sci.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4255683','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4255683"><span>Oral health and welfare state regimes: a cross-national analysis of European countries</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Guarnizo-Herreño, Carol C; Tsakos, Georgios; Sheiham, Aubrey; Watt, Richard G</p> <p>2013-01-01</p> <p>Very little is known about the potential relationship between welfare state regimes and oral health. This study assessed the oral health of adults in a range of European countries clustered by welfare regimes according to Ferrera's typology and the complementary Eastern type. We analysed data from Eurobarometer wave 72.3, a cross-sectional survey of 31 European countries carried out in 2009. We evaluated three self-reported oral health outcomes: edentulousness, no functional dentition (<20 natural teeth), and oral impacts on daily living. Age-standardized prevalence rates were estimated for each country and for each welfare state regime. The Scandinavian regime showed lower prevalence rates for all outcomes. For edentulousness and no functional dentition, there were higher prevalence rates in the Eastern regime but no significant differences between Anglo-Saxon, Bismarckian, and Southern regimes. The Southern regime presented a higher prevalence of oral impacts on daily living. 
Results by country indicated that Sweden had the lowest prevalences for edentulousness and no functional dentition, and Denmark had the lowest prevalence for oral impacts. The results suggest that Scandinavian welfare states, with more redistributive and universal welfare policies, had better population oral health. Future research should provide further insights about the potential mechanisms through which welfare-state regimes would influence oral health. PMID:23659239</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1424974-height-dependency-aerosol-cloud-interaction-regimes-height-dependency-aci-regime','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1424974-height-dependency-aerosol-cloud-interaction-regimes-height-dependency-aci-regime"><span>Height Dependency of Aerosol-Cloud Interaction Regimes: Height Dependency of ACI Regime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Chen, Jingyi; Liu, Yangang; Zhang, Minghua</p> <p></p> <p>This study investigates the height dependency of aerosol-cloud interaction regimes in terms of the joint dependence of the key cloud microphysical properties (e.g. cloud droplet number concentration, cloud droplet relative dispersion, etc.) on aerosol number concentration (Na) and vertical velocity (w). The three distinct regimes with different microphysical features are the aerosol-limited regime, the updraft-limited regime, and the transitional regime. 
The results reveal two new phenomena in the updraft-limited regime: 1) the “condensational broadening” of the cloud droplet size distribution, in contrast to the well-known “condensational narrowing” in the aerosol-limited regime; 2) above the level of maximum supersaturation, some cloud droplets are deactivated into interstitial aerosols in the updraft-limited regime, whereas all droplets remain activated in the aerosol-limited regime. Further analysis shows that the particle equilibrium supersaturation plays an important role in understanding these unique features. Also examined is the height of warm rain initiation and its dependence on Na and w. The rain initiation height is found to depend primarily on either Na or w or both in different Na-w regimes, thus suggesting a strong regime dependence of the second aerosol indirect effect.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1424974-height-dependency-aerosol-cloud-interaction-regimes-height-dependency-aci-regime','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1424974-height-dependency-aerosol-cloud-interaction-regimes-height-dependency-aci-regime"><span>Height Dependency of Aerosol-Cloud Interaction Regimes: Height Dependency of ACI Regime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Chen, Jingyi; Liu, Yangang; Zhang, Minghua; ...</p> <p>2018-01-10</p> <p>This study investigates the height dependency of aerosol-cloud interaction regimes in terms of the joint dependence of the key cloud microphysical properties (e.g. cloud droplet number concentration, cloud droplet relative dispersion, etc.) on aerosol number concentration (Na) and vertical velocity (w). The three distinct regimes with different microphysical features are the aerosol-limited regime, the updraft-limited regime, and the transitional regime. 
The results reveal two new phenomena in the updraft-limited regime: 1) the “condensational broadening” of the cloud droplet size distribution, in contrast to the well-known “condensational narrowing” in the aerosol-limited regime; 2) above the level of maximum supersaturation, some cloud droplets are deactivated into interstitial aerosols in the updraft-limited regime, whereas all droplets remain activated in the aerosol-limited regime. Further analysis shows that the particle equilibrium supersaturation plays an important role in understanding these unique features. Also examined is the height of warm rain initiation and its dependence on Na and w. The rain initiation height is found to depend primarily on either Na or w or both in different Na-w regimes, thus suggesting a strong regime dependence of the second aerosol indirect effect.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.8176E..1ZA','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.8176E..1ZA"><span>Monitoring urban land cover with the use of satellite remote sensing techniques as a means of flood risk assessment in Cyprus</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alexakis, Dimitris; Hadjimitsis, Diofantos; Agapiou, Athos; Themistocleous, Kyriacos; Retalis, Adrianos</p> <p>2011-11-01</p> <p>The increasing frequency of flood inundation in regions all over the world has heightened the need for effective flood risk management. As flood frequency rises steadily, owing to ever-increasing human activity on physical floodplains, the destructive financial impact of floods grows correspondingly. A flood can be defined as a mass of water that produces runoff on land that is not normally covered by water. 
Earth observation techniques such as satellite remote sensing can contribute to more efficient flood risk mapping in line with EU Directive 2007/60. This study highlights the need for digital mapping of urban sprawl in a catchment area in Cyprus and assesses its contribution to flood risk. The Yialias river (Nicosia, Cyprus), where devastating flash flood events took place in 2003 and 2009, was selected as the case study. To examine the diachronic land cover regime of the study area, multi-temporal satellite imagery (e.g., Landsat TM/ETM+, ASTER) was processed and analyzed. The land cover regime was examined in detail using post-processing classification algorithms such as Maximum Likelihood, Parallelepiped, Minimum Distance, Spectral Angle, and ISODATA. Texture features were calculated using the Grey Level Co-Occurrence Matrix. In addition, three classification techniques were compared: multispectral classification, texture-based classification, and a combination of both. The classification products were compared and evaluated for their accuracy. Moreover, a knowledge-rule method based on spectral, texture, and shape features is proposed to create efficient land use and land cover maps of the study area. Morphometric parameters such as stream frequency, drainage density, and elongation ratio were calculated to extract the basic watershed characteristics. In terms of the impacts of land use/cover on flooding, GIS and the Fragstats tool were used, through spatial metrics, to identify trends, both visually and statistically, resulting from land use changes in a flood-prone area such as Yialias. The results indicated a considerable increase in urban land cover over the last 30 years. 
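Of the classifiers listed above, Minimum Distance is the simplest to illustrate: each pixel is assigned to the class whose training-derived mean spectrum is nearest in band space. A minimal sketch, with invented band values and class means (not the study's data):

```python
import numpy as np

# Minimum Distance classification: nearest class-mean spectrum wins.
def minimum_distance_classify(pixels, class_means):
    """pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands).
    Returns the index of the nearest class mean for each pixel."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

means = np.array([[30.0, 20.0, 10.0],    # e.g. class 0: urban
                  [10.0, 40.0, 60.0]])   # e.g. class 1: vegetation
px = np.array([[28.0, 22.0, 12.0],
               [12.0, 38.0, 55.0]])
print(minimum_distance_classify(px, means))  # → [0 1]
```

Unlike Maximum Likelihood, this ignores class covariance, which is why it is usually the weakest, but also the cheapest, of the classifiers compared.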
Taken together, these findings indicate that one of the main driving forces of increasing flood risk in catchment areas in Cyprus is human activity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24463852','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24463852"><span>Spatially dynamic forest management to sustain biodiversity and economic returns.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mönkkönen, Mikko; Juutinen, Artti; Mazziotta, Adriano; Miettinen, Kaisa; Podkopaev, Dmitry; Reunanen, Pasi; Salminen, Hannu; Tikkanen, Olli-Pekka</p> <p>2014-02-15</p> <p>Production of marketed commodities and protection of biodiversity in natural systems often conflict, and thus the continuously expanding human need for more goods and benefits from global ecosystems urgently calls for strategies to resolve this conflict. In this paper, we addressed the potential of a forest landscape to simultaneously produce habitats for species and economic returns, and how the conflict between habitat availability and timber production varies among taxa. Secondly, we aimed to reveal an optimal combination of management regimes that maximizes habitat availability for given levels of economic returns. We used multi-objective optimization tools to analyze data from a boreal forest landscape consisting of about 30,000 forest stands simulated 50 years into the future. We included seven alternative management regimes, spanning from the recommended intensive forest management regime to complete set-aside of stands (protection), and ten different taxa representing a wide variety of habitat associations and social values. Our results demonstrate that it is possible to achieve large improvements in habitat availability with little loss in economic returns. 
In general, providing dead-wood-associated species with more habitat tended to be more expensive than providing for the requirements of other species. No management regime alone maximized habitat availability for the species, and systematic use of any single management regime resulted in considerable reductions in economic returns. Compared with an optimal combination of management regimes, a consistent application of the recommended management regime would result in a 5% reduction in economic returns and up to a 270% reduction in habitat availability. Thus, for all taxa a combination of management regimes was required to achieve the optimum. Refraining from silvicultural thinnings on a proportion of stands should be considered a cost-effective management option in commercial forests to reconcile the conflict between economic returns and the habitat required by dead-wood-associated species. In general, a viable strategy to maintain biodiversity in production landscapes would be to diversify management regimes. Our results emphasize the importance of careful landscape-level forest management planning because optimal combinations of management regimes were taxon-specific. For cost-efficiency, the results call for balanced and correctly targeted strategies among habitat types. Copyright © 2013 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003PhDT.......143C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003PhDT.......143C"><span>Monte-Carlo computation of turbulent premixed methane/air ignition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Carmen, Christina Lieselotte</p> <p></p> <p>The present work describes the results obtained by a time-dependent numerical technique that simulates the early flame development of a spark-ignited, premixed, lean, gaseous methane/air mixture, with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented, including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is also utilized to simulate phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. 
Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, show excellent agreement with results obtained by other experimental and numerical researchers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.9193P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.9193P"><span>Ground Validation Assessments of GPM Core Observatory Science Requirements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail</p> <p>2017-04-01</p> <p>NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view [FOV] to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence on assumptions related to physics) and on the quality of the ground-validation (GV) data being used to assess the satellite products. 
We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation physics considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two-minute and 30-minute rain gauge bias-adjusted radar rain rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU multi-radar multi-sensor (MRMS) product over the continental U.S.; 2) polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) multi-regime, field campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase and fallspeed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSD and precipitation rates/types, respectively. Within the limits of GV-product uncertainty we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes. For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random error requirements associated with retrievals of rain rate and the required +/- 0.5 millimeter error bounds for mass-weighted mean drop diameter. Version-04 (V4) GMI GPROF radiometer-based rain rate products exhibit reasonable agreement with GV, but do not completely meet mission science requirements over the continental U.S. for lighter rain rates (e.g., 1 mm/hr) due to excessive random error (~75%). Importantly, substantial corrections were made to the V4 GPROF algorithm, and preliminary analysis of Version 5 (V5) rain products indicates more robust performance relative to GV. 
For the frozen phase and a modest GPM requirement to "demonstrate detection of snowfall", DPR products do successfully identify snowfall within the sensitivity and beam sampling limits of the DPR instrument (~12 dBZ lower limit; lowest clutter-free bins). Similarly, the GPROF algorithm successfully "detects" falling snow and delineates it from liquid precipitation. However, the GV approach to computing falling-snow "detection" statistics is intrinsically tied to GPROF Bayesian algorithm-based thresholds of precipitation "detection" and model analysis temperature, and is not sufficiently tied to SWER. Hence we will also discuss ongoing work to establish the lower threshold SWER for "detection" using combined GV radar, gauge and disdrometer-based case studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.1075K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.1075K"><span>Genetic analysis of seasonal runoff based on automatic techniques of hydrometeorological data processing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kireeva, Maria; Sazonov, Alexey; Rets, Ekaterina; Ezerova, Natalia; Frolova, Natalia; Samsonov, Timofey</p> <p>2017-04-01</p> <p>Determining a river's feeding type is a complex, multifactor task. Such partitioning should be based, on the one hand, on the genesis of the feeding water and, on the other, on its physical pathway. At the same time, it should consider the relationship of the feeding type with the corresponding phase of the water regime. Because of these difficulties and the complexity of the approach, there are many different variants of separating a flow hydrograph into feeding types. 
The most common method is extraction of the so-called base component, which in one way or another reflects the groundwater feeding of the river. In this case, the separation is most often based on the principle of local minima or on graphic separation of this component. However, neither the origin of the water nor the corresponding phase of the water regime is then considered. In this paper, the authors offer a method of complex automated analysis of the genetic components of a river's feeding, together with the separation of specific phases of the water regime. The objects of the study are medium and large rivers of European Russia that have a pronounced spring flood formed by melt water, and summer-autumn and winter low-water periods periodically interrupted by rain or thaw floods. The method is based on the genetic separation of the hydrograph proposed in the 1960s by B. I. Kudelin. This technique is designed for large rivers that have a hydraulic connection with groundwater horizons during floods. For better detection of flood genesis, the analysis draws on reanalysis data for temperature and precipitation. The separation is based on the following fundamental graphic-analytical principles: • ground feeding tends to zero during the passage of the flood peak • the beginning of a flood is identified when a critical low-water discharge is exceeded • flood periods are determined on the basis of exceeding the critical low-water discharge, and are attributed to thaw in the case of above-zero temperatures • during thaw and rain floods, ground feeding is determined by interpolating the values before and after the flood • floods during the rise and fall of high water are determined by plotting depletion curves • the groundwater component of runoff is divided into dynamic and static parts. The separation algorithm described was implemented as Fortran code, with additional modules connected via R-Studio. 
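The local-minimum principle mentioned above is a standard baseflow-separation device and is easy to sketch. This is a hedged illustration only: the authors' Fortran/R implementation, its 15 calibration parameters, and its genetic bookkeeping are not reproduced here, and the window length and discharge values are invented.

```python
import numpy as np

# Local-minimum baseflow separation: pick discharge values that are minima
# over a sliding window, then interpolate between them as the baseflow line.
def baseflow_local_minima(q, window=5):
    """q: daily discharge series (numpy array). Returns a baseflow series."""
    n = len(q)
    half = window // 2
    idx = [i for i in range(n)
           if q[i] == min(q[max(0, i - half):min(n, i + half + 1)])]
    if 0 not in idx:                 # anchor the interpolation at both ends
        idx.insert(0, 0)
    if n - 1 not in idx:
        idx.append(n - 1)
    base = np.interp(np.arange(n), idx, q[idx])
    return np.minimum(base, q)       # baseflow cannot exceed total flow

q = np.array([5.0, 5.0, 6.0, 20.0, 45.0, 30.0, 12.0, 7.0, 6.0, 5.0])
bf = baseflow_local_minima(q)
assert np.all(bf <= q)               # the flood peak is excluded from baseflow
```

As the abstract notes, such a purely graphical rule says nothing about where the water came from, which is the gap the genetic method is meant to fill.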
Using two languages makes it possible, on the one hand, to speed up the processing of a large array of daily water discharges and, on the other, to facilitate visualization and interpretation of the results. The algorithm includes the selection of 15 calibration parameters describing the characteristics of each watershed. Verification and calibration of the program were carried out for 20 rivers of European Russia. According to the calculations, there is a significant increase in the groundwater flow component over most of the watersheds and an increase in the role of floods as a phase of the water regime as a whole. This research was supported by the Russian Foundation for Basic Research (contract No. 16-35-60080).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFM.H43D1274H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFM.H43D1274H"><span>Modeling Food Delivery Dynamics For Juvenile Salmonids Under Variable Flow Regimes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Harrison, L.; Utz, R.; Anderson, K.; Nisbet, R.</p> <p>2010-12-01</p> <p>Traditional approaches for assessing instream flow needs for salmonids have typically focused on the importance of physical habitat in determining fish habitat selection. This somewhat simplistic approach does not account for differences in food delivery rates to salmonids that arise from spatial variability in river morphology and hydraulics and from temporal variations in the flow regime. Explicitly linking changes in the flow regime to food delivery dynamics is an important step in advancing process-based bioenergetic models that seek to predict growth rates of salmonids across various life stages. 
Here we investigate how food delivery rates for juvenile salmonids vary both spatially and with flow magnitude in a meandering reach of the Merced River, CA. We utilize a two-dimensional (2D) hydrodynamic model and discrete particle tracking algorithm to simulate invertebrate drift transport rates at baseflow and a near-bankfull discharge. Modeling results indicate that at baseflow, the maximum drift density occurs in the channel thalweg, while drift densities decrease towards the channel margins due to the process of organisms settling out of the drift. During high-flow events, typical of spring dam-releases, the invertebrate drift transport pathway follows a similar trajectory along the high velocity core and the drift concentrations are greatest in the channel centerline, though the zone of invertebrate transport occupies a greater fraction of the channel width. Based on invertebrate supply rates alone, feeding juvenile salmonids would be expected to be distributed down the channel centerline where the maximum predicted food delivery rates are located in this reach. However, flow velocities in these channel sections are beyond maximum sustainable swimming speeds for most juvenile salmonids. Our preliminary findings suggest that a lack of low velocity refuge may prevent juvenile salmonids from deriving energy from the areas with maximum drift density in this reach. 
Future efforts will focus on integration of food delivery and bioenergetic models to account for conflicting demands of maximizing food intake while minimizing the energetic costs of swimming.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3895036','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3895036"><span>Optimization of Landscape Services under Uncoordinated Management by Multiple Landowners</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Porto, Miguel; Correia, Otília; Beja, Pedro</p> <p>2014-01-01</p> <p>Landscapes are often patchworks of private properties, where composition and configuration patterns result from cumulative effects of the actions of multiple landowners. Securing the delivery of services in such multi-ownership landscapes is challenging, because it is difficult to assure tight compliance to spatially explicit management rules at the level of individual properties, which may hinder the conservation of critical landscape features. To deal with these constraints, a multi-objective simulation-optimization procedure was developed to select non-spatial management regimes that best meet landscape-level objectives, while accounting for uncoordinated and uncertain response of individual landowners to management rules. Optimization approximates the non-dominated Pareto frontier, combining a multi-objective genetic algorithm and a simulator that forecasts trends in landscape pattern as a function of management rules implemented annually by individual landowners. 
The procedure was demonstrated with a case study on the optimal scheduling of fuel treatments in cork oak forest landscapes, involving six objectives related to reducing management costs (1), reducing fire risk (3), and protecting biodiversity associated with mid- and late-successional understories (2). There was a trade-off between cost, fire risk, and biodiversity objectives that could be minimized by selecting management regimes in which ca. 60% of landowners clear the understory at short intervals (around 5 years), and the remainder manage at long intervals (ca. 75 years) or not at all. The optimal management regimes produce a mosaic landscape dominated by stands with herbaceous and low-shrub understories, but also with a satisfactory representation of old understories, which was favorable in terms of both fire risk and biodiversity. The simulation-optimization procedure presented can be extended to incorporate a wide range of landscape dynamic processes, management rules, and quantifiable objectives. It may thus be adapted to other socio-ecological systems, particularly where specific patterns of landscape heterogeneity are to be maintained despite imperfect management by multiple landowners. 
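The Pareto-frontier idea used above reduces to a simple dominance test: a solution is kept only if no other solution is at least as good on every objective and strictly better on one. A minimal sketch, with all objectives to be minimized and the candidate values invented (not the paper's data):

```python
# Keep only non-dominated candidates (Pareto front), minimizing all objectives.
def pareto_front(points):
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (management cost, fire risk, habitat loss) for four hypothetical regimes
candidates = [(1.0, 0.8, 0.5), (0.9, 0.9, 0.4), (1.1, 0.9, 0.6), (0.9, 0.8, 0.5)]
print(pareto_front(candidates))  # → [(0.9, 0.9, 0.4), (0.9, 0.8, 0.5)]
```

A multi-objective genetic algorithm such as the one described above evolves a population toward this front rather than collapsing the objectives into a single score.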
PMID:24465833</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvE..96f2614C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvE..96f2614C"><span>Folding time dependence of the motions of a molecular motor in an amorphous medium</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ciobotarescu, Simona; Bechelli, Solene; Rajonson, Gabriel; Migirditch, Samuel; Hester, Brooke; Hurduc, Nicolae; Teboul, Victor</p> <p>2017-12-01</p> <p>We investigate the dependence of the displacements of a molecular motor embedded inside a glassy material on its characteristic folding time τf. We observe two different time regimes. For slow foldings (regime I) the diffusion evolves very slowly with τf, while for rapid foldings (regime II) the diffusion increases strongly with τf (D ≈ τf⁻²), suggesting two different physical mechanisms. We find that in regime I the motor's displacement during the folding process is counteracted by a reverse displacement during the unfolding, while in regime II this counteraction is much weaker. We notice that regime I behavior is reminiscent of the scallop theorem that holds for larger motors in a continuous medium. We find that the difference in the efficiency of the motor's motion explains most of the observed difference between the two regimes. For fast foldings the motor trajectories differ significantly from the opposite trajectories induced by the subsequent unfolding process, resulting in a more efficient global motion than for slow foldings. This result agrees with the expectation from fluctuation theorems for time-reversal mechanisms. 
In agreement with the fluctuation theorems we find that the motors are unexpectedly more efficient when they are generating more entropy, a result that can be used to increase dramatically the motor's motion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170007322','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170007322"><span>Radio Frequency Interference Detection for Passive Remote Sensing Using Eigenvalue Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schoenwald, Adam; Kim, Seung-Jun; Mohammed-Tano, Priscilla</p> <p>2017-01-01</p> <p>Radio frequency interference (RFI) can corrupt passive remote sensing measurements taken with microwave radiometers. With the increasingly utilized spectrum and the push for larger bandwidth radiometers, the likelihood of RFI contamination has grown significantly. In this work, an eigenvalue-based algorithm is developed to detect the presence of RFI and provide estimates of RFI-free radiation levels. 
Simulated tests show that the proposed detector outperforms conventional kurtosis-based RFI detectors in the low-to-medium interference-to-noise-power-ratio (INR) regime under continuous wave (CW) and quadrature phase shift keying (QPSK) RFIs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170004854','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170004854"><span>Radio Frequency Interference Detection for Passive Remote Sensing Using Eigenvalue Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schoenwald, Adam J.; Kim, Seung-Jun; Mohammed, Priscilla N.</p> <p>2017-01-01</p> <p>Radio frequency interference (RFI) can corrupt passive remote sensing measurements taken with microwave radiometers. With the increasingly utilized spectrum and the push for larger bandwidth radiometers, the likelihood of RFI contamination has grown significantly. In this work, an eigenvalue-based algorithm is developed to detect the presence of RFI and provide estimates of RFI-free radiation levels. 
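The general eigenvalue idea can be sketched as follows: RFI is typically a low-rank, cross-channel-correlated addition to otherwise uncorrelated receiver noise, so a dominant eigenvalue of the sample covariance flags contamination. A toy NumPy illustration (the threshold and signal model are assumptions for this sketch, not the authors' algorithm):

```python
import numpy as np

def eigenvalue_rfi_detect(x, threshold=1.5):
    """Flag RFI when the largest eigenvalue of the sample covariance
    dominates the mean eigenvalue (noise-only spectra are nearly flat)."""
    r = x @ x.conj().T / x.shape[1]   # channels x channels sample covariance
    w = np.linalg.eigvalsh(r)         # eigenvalues in ascending order
    return bool(w[-1] / w.mean() > threshold)

rng = np.random.default_rng(0)
noise = rng.standard_normal((4, 2048))            # 4 channels, noise only
tone = np.cos(2 * np.pi * 0.1 * np.arange(2048))  # CW interferer
rfi = noise + np.ones((4, 1)) * tone              # same tone in every channel

flag_noise = eigenvalue_rfi_detect(noise)  # flat spectrum, stays below threshold
flag_rfi = eigenvalue_rfi_detect(rfi)      # rank-one RFI inflates the top eigenvalue
```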
Simulated tests show that the proposed detector outperforms conventional kurtosis-based RFI detectors in the low-to-medium interference-to-noise-power-ratio (INR) regime under continuous wave (CW) and quadrature phase shift keying (QPSK) RFIs.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvC..97a5803A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvC..97a5803A"><span>Dynamics of fragment formation in neutron-rich matter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alcain, P. N.; Dorso, C. 
O.</p> <p>2018-01-01</p> <p>Background: Neutron stars are astronomical systems with nucleons subjected to extreme conditions. Due to the longer range Coulomb repulsion between protons, the system has structural inhomogeneities. Several interactions tailored to reproduce nuclear matter plus a screened Coulomb term reproduce these inhomogeneities known as nuclear pasta. These structural inhomogeneities, located in the crusts of neutron stars, can also arise in expanding systems depending on the thermodynamic conditions (temperature, proton fraction, etc.) and the expansion velocity. Purpose: We aim to find the dynamics of the fragment formation for expanding systems simulated according to the little big bang model. This expansion resembles the evolution of merging neutron stars. Method: We study the dynamics of the nucleons with semiclassical molecular dynamics models. Starting with an equilibrium configuration, we expand the system homogeneously until we arrive at an asymptotic configuration (i.e., very low final densities). We study, with four different cluster recognition algorithms, the fragment distribution throughout this expansion and the dynamics of the cluster formation. Results: Studying the topology of the equilibrium states, before the expansion, we reproduced the known pasta phases plus a novel phase we called pregnocchi, consisting of proton aggregates embedded in a neutron sea. We have identified different fragmentation regimes, depending on the initial temperature and fragment velocity. In particular, for the already mentioned pregnocchi, a neutron cloud surrounds the clusters during the early stages of the expansion, resulting in systems that give rise to configurations compatible with the emergence of the r process. Conclusions: We showed that a proper identification of the cluster distribution is highly dependent on the cluster recognition algorithm chosen, and found that the early cluster recognition algorithm (ECRA) was the most stable one. 
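ECRA itself is not reproduced here, but the simpler distance-based (MST-type) recognition it is compared against can be sketched with a union-find pass; the cutoff and particle coordinates below are illustrative, and ECRA additionally requires fragments to be energetically bound:

```python
def find_clusters(positions, cutoff):
    """Group particles into clusters: two particles belong to the same
    cluster when their separation is below `cutoff` (MST criterion)."""
    n = len(positions)
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            if d2 < cutoff ** 2:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Toy configuration: two bound pairs and one free particle
pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (5.0, 5.0, 5.0),
       (5.4, 5.0, 5.0), (9.0, 0.0, 0.0)]
clusters = find_clusters(pts, cutoff=1.0)
```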
This approach allowed us to identify the dynamics of fragment formation. These calculations pave the way to a comparison between Earth experiments and neutron star studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/12585','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/12585"><span>Rule-based mapping of fire-adapted vegetation and fire regimes for the Monongahela National Forest</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Melissa A. Thomas-Van Gundy; Gregory J. Nowacki; Thomas M. Schuler</p> <p>2007-01-01</p> <p>A rule-based approach was employed in GIS to map fire-adapted vegetation and fire regimes within the proclamation boundary of the Monongahela National Forest. Spatial analyses and maps were generated using ArcMap 9.1. The resulting fire-adaptation scores were then categorized into standard fire regime groups. Fire regime group V (200+ yrs) was the most common, assigned...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013APS..DFDH23004H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013APS..DFDH23004H"><span>Modeling Interactions Among Turbulence, Gas-Phase Chemistry, Soot and Radiation Using Transported PDF Methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Haworth, Daniel</p> <p>2013-11-01</p> <p>The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. 
In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. 
In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5065176','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5065176"><span>Placental Growth Factor (PlGF) in Women with Suspected Pre-Eclampsia Prior to 35 Weeks’ Gestation: A Budget Impact Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Duckworth, Suzy; Seed, Paul T.; Mackillop, Lucy; Shennan, Andrew H.; Hunter, Rachael</p> <p>2016-01-01</p> <p>Objective To model the resource implications of placental growth factor (PlGF) testing in women with suspected pre-eclampsia prior to 35 weeks’ gestation as part of a management algorithm, compared with current practice. Methods Data on resource use from 132 women with suspected pre-eclampsia prior to 35 weeks’ gestation, enrolled in a prospective observational cohort study evaluating PlGF measurement within antenatal assessment units within two UK consultant-led maternity units, was extracted by case note review. A decision analytic model was developed using these data to establish the budget impact of managing women with suspected pre-eclampsia for two weeks from the date of PlGF testing, using a clinical management algorithm and reference cost tariffs. The main outcome measures of resource use (numbers of outpatient appointments, ultrasound investigations and hospital admissions) were correlated to final diagnosis and used to calculate comparative management regimes. Results The mean cost saving associated with the PlGF test (in the PlGF plus management arm) was £35,087 (95% CI -£33,181 to -£36,992) per 1,000 women. This equated to a saving of £582 (95% CI -£552 to -£613) per woman tested. 
In 94% of iterations, PlGF testing was associated with cost saving compared to current practice. Conclusions This analysis suggests PlGF used as part of a clinical management algorithm in women presenting with suspected pre-eclampsia prior to 35 weeks’ gestation could provide cost savings by reducing unnecessary resource use. Introduction of PlGF testing could be used to direct appropriate resource allocation and overall would be cost saving. PMID:27741259</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25912342','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25912342"><span>Two-spoke placement optimization under explicit specific absorption rate and power constraints in parallel transmission at ultra-high field.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas</p> <p>2015-06-01</p> <p>The spokes method combined with parallel transmission is a promising technique to mitigate the B1(+) inhomogeneity at ultra-high field in 2D imaging. To date however, the spokes placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by making the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost-function, thereby reducing the size of the parameter space and allowing a more exhaustive search. 
The algorithm starts from a set of initial k-space candidates and, for all of them in parallel, simultaneously optimizes the RF subpulse weights and the k-space locations under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2(∗)-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H53I1600Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H53I1600Y"><span>Towards an Improved Representation of Reservoirs and Water Management in a Land Surface-Hydrology Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yassin, F.; Anis, M. R.; Razavi, S.; Wheater, H. S.</p> <p>2017-12-01</p> <p>Water management through reservoirs, diversions, and irrigation has significantly changed river flow regimes and basin-wide energy and water balance cycles. Failure to represent these effects limits the performance of land surface-hydrology models not only for streamflow prediction but also for the estimation of soil moisture, evapotranspiration, and feedbacks to the atmosphere. 
Despite recent research to improve the representation of water management in land surface models, there remains a need to develop improved modeling approaches that work in complex and highly regulated basins such as the 406,000 km2 Saskatchewan River Basin (SaskRB). A particular challenge for regional and global application is a lack of local information on reservoir operational management. To this end, we implemented a reservoir operation, water abstraction, and irrigation algorithm in the MESH land surface-hydrology model and tested it over the SaskRB. MESH is Environment Canada's land surface-hydrology modeling system, which couples the Canadian Land Surface Scheme (CLASS) with a hydrological routing model. The implemented reservoir algorithm uses an inflow-outflow relationship that accounts for the physical characteristics of reservoirs (e.g., storage-area-elevation relationships) and includes simplified operational characteristics based on local information (e.g., monthly target volume and release under limited, normal, and flood storage zones). The irrigation algorithm uses the difference between actual and potential evapotranspiration to estimate irrigation water demand. This irrigation demand is supplied from the neighboring reservoirs/diversions in the river system. We calibrated the model enabled with the new reservoir and irrigation modules in a multi-objective optimization setting. 
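The zone-based release logic described above can be illustrated with a deliberately simplified rule; the zone thresholds and release values below are invented for illustration and are not MESH's actual operating tables:

```python
def reservoir_release(storage, target, flood_level, r_limited, r_normal, r_flood):
    """Zone-based monthly release (illustrative): limited zone below the
    target volume, normal zone between target and flood level, and
    evacuation of any surplus above the flood level."""
    if storage < target:
        return r_limited                       # limited zone: conserve water
    if storage < flood_level:
        return r_normal                        # normal operating zone
    return r_flood + (storage - flood_level)   # flood zone: spill the surplus

def irrigation_demand(pet, aet):
    """Irrigation water demand as the PET - AET deficit, as in the text."""
    return max(0.0, pet - aet)

# Illustrative storages (million m^3) against target 50 and flood level 80
releases = [reservoir_release(s, 50, 80, 5, 20, 30) for s in (40, 60, 90)]
```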
Results showed that the reservoir and irrigation modules significantly improved the MESH model performance in generating streamflow and evapotranspiration across the SaskRB, and that our approach provides a basis for improved large-scale hydrological modelling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4624158','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4624158"><span>DNA-Binding Kinetics Determines the Mechanism of Noise-Induced Switching in Gene Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Tse, Margaret J.; Chu, Brian K.; Roy, Mahua; Read, Elizabeth L.</p> <p>2015-01-01</p> <p>Gene regulatory networks are multistable dynamical systems in which attractor states represent cell phenotypes. Spontaneous, noise-induced transitions between these states are thought to underlie critical cellular processes, including cell developmental fate decisions, phenotypic plasticity in fluctuating environments, and carcinogenesis. As such, there is increasing interest in the development of theoretical and computational approaches that can shed light on the dynamics of these stochastic state transitions in multistable gene networks. We applied a numerical rare-event sampling algorithm to study transition paths of spontaneous noise-induced switching for a ubiquitous gene regulatory network motif, the bistable toggle switch, in which two mutually repressive genes compete for dominant expression. We find that the method can efficiently uncover detailed switching mechanisms that involve fluctuations both in occupancies of DNA regulatory sites and copy numbers of protein products. 
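As background to the rare-event sampler, the unbiased dynamics of a simplified, protein-only toggle switch can be simulated with a standard Gillespie algorithm; all parameters here are illustrative, and this sketch omits the explicit DNA-binding states the study examines:

```python
import random

def gillespie_toggle(steps=20000, k=10.0, gamma=1.0, K=3.0, n=2, seed=1):
    """Gillespie simulation of a protein-only toggle switch: each gene's
    synthesis is repressed (Hill function) by the other's product."""
    rng = random.Random(seed)
    a, b = 0, 0
    for _ in range(steps):
        rates = [k / (1 + (b / K) ** n),   # synthesize A, repressed by B
                 k / (1 + (a / K) ** n),   # synthesize B, repressed by A
                 gamma * a,                # degrade A
                 gamma * b]                # degrade B
        total = sum(rates)
        rng.expovariate(total)             # waiting time (not tracked here)
        r = rng.random() * total           # pick a reaction by its propensity
        if r < rates[0]:
            a += 1
        elif r < rates[0] + rates[1]:
            b += 1
        elif r < rates[0] + rates[1] + rates[2]:
            a -= 1
        else:
            b -= 1
    return a, b

a_final, b_final = gillespie_toggle()  # typically one gene dominates
```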
In addition, we show that the rate parameters governing binding and unbinding of regulatory proteins to DNA strongly influence the switching mechanism. In a regime of slow DNA-binding/unbinding kinetics, spontaneous switching occurs relatively frequently and is driven primarily by fluctuations in DNA-site occupancies. In contrast, in a regime of fast DNA-binding/unbinding kinetics, switching occurs rarely and is driven by fluctuations in levels of expressed protein. Our results demonstrate how spontaneous cell phenotype transitions involve collective behavior of both regulatory proteins and DNA. Computational approaches capable of simulating dynamics over many system variables are thus well suited to exploring dynamic mechanisms in gene networks. PMID:26488666</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvB..96c5152G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvB..96c5152G"><span>Diagrammatic Monte Carlo approach for diagrammatic extensions of dynamical mean-field theory: Convergence analysis of the dual fermion technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gukelberger, Jan; Kozik, Evgeny; Hafermann, Hartmut</p> <p>2017-07-01</p> <p>The dual fermion approach provides a formally exact prescription for calculating properties of a correlated electron system in terms of a diagrammatic expansion around dynamical mean-field theory (DMFT). Most practical implementations, however, neglect higher-order interaction vertices beyond two-particle scattering in the dual effective action and further truncate the diagrammatic expansion in the two-particle scattering vertex to a leading-order or ladder-type approximation. 
In this work, we compute the dual fermion expansion for the two-dimensional Hubbard model including all diagram topologies with two-particle interactions to high orders by means of a stochastic diagrammatic Monte Carlo algorithm. We benchmark the obtained self-energy against numerically exact diagrammatic determinant Monte Carlo simulations to systematically assess convergence of the dual fermion series and the validity of these approximations. We observe that, from high temperatures down to the vicinity of the DMFT Néel transition, the dual fermion series converges very quickly to the exact solution in the whole range of Hubbard interactions considered (4 ≤ U/t ≤ 12), implying that contributions from higher-order vertices are small. As the temperature is lowered further, we observe slower series convergence, convergence to incorrect solutions, and ultimately divergence. This happens in a regime where magnetic correlations become significant. We find, however, that the self-consistent particle-hole ladder approximation yields reasonable and often even highly accurate results in this regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22521914-multi-parametric-study-rising-buoyant-flux-tubes-adiabatic-stratification-using-amr','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22521914-multi-parametric-study-rising-buoyant-flux-tubes-adiabatic-stratification-using-amr"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Martínez-Sykora, Juan; Cheung, Mark C. M.; Moreno-Insertis, Fernando</p> <p></p> <p>We study the buoyant rise of magnetic flux tubes embedded in an adiabatic stratification using two- and three-dimensional magnetohydrodynamic simulations. 
We analyze the dependence of the tube evolution on the field line twist and on the curvature of the tube axis in different diffusion regimes. To be able to achieve a comparatively high spatial resolution we use the FLASH code, which has a built-in Adaptive Mesh Refinement (AMR) capability. Our 3D experiments reach Reynolds numbers that permit a reasonable comparison of the results with those of previous 2D simulations. When the experiments are run without AMR, hence with a comparatively large diffusivity, the amount of longitudinal magnetic flux retained inside the tube increases with the curvature of the tube axis. However, when a low-diffusion regime is reached by using the AMR algorithms, the magnetic twist is able to prevent the splitting of the magnetic loop into vortex tubes and the loop curvature does not play any significant role. We detect the generation of vorticity in the main body of the tube of opposite sign on the opposite sides of the apex. This is a consequence of the inhomogeneity of the azimuthal component of the field on the flux surfaces. The lift force associated with this global vorticity makes the flanks of the tube move away from their initial vertical plane in an antisymmetric fashion. 
The trajectories have an oscillatory motion superimposed, due to the shedding of vortex rolls to the wake, which creates a von Kármán street.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JCoPh.354..320K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JCoPh.354..320K"><span>Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.</p> <p>2018-02-01</p> <p>A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. 
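The MLMC idea can be sketched on a toy one-dimensional SDE: estimate the quantity of interest with many cheap coarse-grid samples, then add corrections from coupled coarse/fine path pairs that share the same Brownian increments. All parameters here are illustrative; this is not the dispersion model of the paper:

```python
import random

def mlmc_estimate(levels=4, n0=4000, seed=7):
    """Multilevel Monte Carlo estimate of E[X(T)] for dX = -X dt + dW,
    X(0) = 1, T = 1, via Euler-Maruyama. Level l uses 2**(l+1) fine steps;
    the coarse path in each correction reuses the fine path's Brownian
    increments, which keeps the correction variance small."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels):
        nf = 2 ** (l + 1)          # fine steps on this level
        n = max(n0 // 2 ** l, 1)   # fewer samples on costlier levels
        dt = 1.0 / nf
        acc = 0.0
        for _ in range(n):
            xf = xc = 1.0
            prev_dw = 0.0
            for s in range(nf):
                dw = rng.gauss(0.0, dt ** 0.5)
                xf += -xf * dt + dw
                if s % 2 == 0:
                    prev_dw = dw
                else:              # coarse path: one step per two fine steps
                    xc += -xc * (2 * dt) + prev_dw + dw
            acc += xf if l == 0 else xf - xc   # telescoping sum of corrections
        est += acc / n
    return est

est = mlmc_estimate()   # exact answer is exp(-1), about 0.368
```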
MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120015662','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120015662"><span>Robust, Practical Adaptive Control for Launch Vehicles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Orr, Jeb. S.; VanZwieten, Tannen S.</p> <p>2012-01-01</p> <p>A modern mechanization of a classical adaptive control concept is presented with an application to launch vehicle attitude control systems. Due to a rigorous flight certification environment, many adaptive control concepts are infeasible when applied to high-risk aerospace systems; methods of stability analysis are either intractable for high complexity models or cannot be reconciled in light of classical requirements. Furthermore, many adaptive techniques appearing in the literature are not suitable for application to conditionally stable systems with complex flexible-body dynamics, as is often the case with launch vehicles. The present technique is a multiplicative forward loop gain adaptive law similar to that used for the NASA X-15 flight research vehicle. 
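A toy version of such a multiplicative forward-loop gain law, in the spirit of the X-15 approach but with constants invented for illustration: the loop gain is ratcheted up while the sensed error-oscillation amplitude stays below a reference level, backed off when it exceeds it, and clamped to hard limits.

```python
def adapt_gain(k, error_amplitude, reference=0.1, rate=0.02, k_min=0.2, k_max=2.0):
    """Multiplicative gain update: raise k while the measured oscillation
    amplitude is below the reference, lower it when above, then clamp."""
    step = 1.0 + rate if error_amplitude < reference else 1.0 - rate
    return min(max(k * step, k_min), k_max)

k = 1.0
for amp in (0.02, 0.02, 0.30):   # two quiet samples, then a detected oscillation
    k = adapt_gain(k, amp)
```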
In digital implementation with several novel features, it is well-suited to application on aerodynamically unstable launch vehicles with thrust vector control via augmentation of the baseline attitude/attitude-rate feedback control scheme. The approach is compatible with standard design features of autopilots for launch vehicles, including phase stabilization of lateral bending and slosh via linear filters. In addition, the method of assessing flight control stability via classical gain and phase margins is not affected under reasonable assumptions. The algorithm's ability to recover from certain unstable operating regimes can in fact be understood in terms of frequency-domain criteria. Finally, simulation results are presented that confirm the ability of the algorithm to improve performance and robustness in realistic failure scenarios.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22611653','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22611653"><span>Social inequalities in "sickness": does welfare state regime type make a difference? A multilevel analysis of men and women in 26 European countries.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>van der Wel, Kjetil A; Dahl, Espen; Thielen, Karsten</p> <p>2012-01-01</p> <p>In comparative studies of health inequalities, public health researchers have usually studied only disease and illness. Recent studies have also examined the sickness dimension of health, that is, the extent to which ill health is accompanied by joblessness, and how this association varies by education within different welfare contexts. This research has used either a limited number of countries or quantitative welfare state measures in studies of many countries. 
In this study, the authors expand on this knowledge by investigating whether a regime approach to the welfare state produces consistent results. They analyze data from the European Union Statistics on Income and Living Conditions (EU-SILC); health was measured by limiting longstanding illness (LLSI). Results show that for both men and women reporting LLSI in combination with low educational level, the probabilities of non-employment were particularly high in the Anglo-Saxon and Eastern welfare regimes, and lowest in the Scandinavian regime. For men, absolute and relative social inequalities in sickness were lowest in the Southern regime; for women, inequalities were lowest in the Scandinavian regime. The authors conclude that the Scandinavian welfare regime is more able than other regimes to protect against non-employment in the face of illness, especially for individuals with low educational level.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1326059','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1326059"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Stevens, Mark J.; Saleh, Omar A.</p> <p></p> <p>We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000 bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. 
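For reference, the freely jointed chain limit discussed in this record has a closed-form force-extension relation (the Langevin function); the sketch below uses normalized units and is a textbook formula, not the paper's simulation model:

```python
import math

def fjc_extension(force, kuhn_length=1.0, kT=1.0):
    """Freely jointed chain fractional extension:
    x/L = coth(u) - 1/u with u = f*b/kT (the Langevin function)."""
    u = force * kuhn_length / kT
    return 1.0 / math.tanh(u) - 1.0 / u

low = fjc_extension(0.01)    # linear (entropic-spring) regime, about u/3
high = fjc_extension(50.0)   # near full extension, 1 - kT/(f*b)
```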
Decreasing the diameter of the beads in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high force logarithmic regime to occur.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPA....8f5004S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPA....8f5004S"><span>Numerical investigation of flow past 17-cylinder array of square cylinders</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shams-ul-Islam, Nazeer, Ghazala; Ying, Zhou Chao</p> <p>2018-06-01</p> <p>In this work, flow past a 17-cylinder array is simulated using the two-dimensional lattice Boltzmann method. The effect of gap spacings (0.5 ≤ gx* ≤ 3, 0.5 ≤ gy* ≤ 3) and Reynolds number (Re = 75 - 150) is analyzed in detail. Results are presented in the form of vorticity contour plots, time-histories of drag and lift coefficients and power spectra of the lift coefficient. Six distinct flow regimes are identified for different gap spacings and Reynolds numbers: steady flow regime, single bluff body flow regime, non-fully developed flow regime, chaotic flow regime, quasi-periodic-I flow regime and quasi-periodic-II flow regime. The chaotic flow regime is the most frequently observed, while the single bluff body flow regime rarely occurs for this configuration. It is observed that the drag force on each cylinder in the 17-cylinder array decreases in the streamwise direction for fixed Reynolds number and gap spacing. The C1 and C2 cylinders experience the maximum drag at small gap spacing and Reynolds number. 
The Reynolds number is also found to have a stronger influence on the flow characteristics than the gap spacings.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018A%26A...610A..12B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018A%26A...610A..12B"><span>Clustering the Orion B giant molecular cloud based on its molecular emission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H.; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R.; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal</p> <p>2018-02-01</p> <p>Context. Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). Aims: We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. Methods: We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional probability density function (PDF) of the line integrated intensities.
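The core Meanshift idea, each point climbing to a mode of the local density, can be sketched in one dimension; the synthetic "intensity" data, bandwidth, and flat kernel below are illustrative assumptions, not the paper's multidimensional setup.

```python
import numpy as np

def mean_shift_1d(points, bandwidth, iters=50):
    """Minimal 1-D Meanshift sketch with a flat kernel: each point is
    repeatedly moved to the mean of its neighbours within `bandwidth`,
    so it converges to a mode of the underlying density."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            near = points[np.abs(points - m) < bandwidth]
            modes[i] = near.mean()
    return modes

rng = np.random.default_rng(0)
# Two synthetic "intensity" populations standing in for distinct emission regimes.
data = np.concatenate([rng.normal(1.0, 0.1, 100), rng.normal(5.0, 0.1, 100)])
modes = mean_shift_1d(data, bandwidth=1.0)
```

Points belonging to the same PDF maximum end up at (numerically) the same mode, which is exactly how cluster membership is defined in this family of algorithms.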
Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. Results: A clustering analysis based only on the J = 1-0 lines of three isotopologues of CO proves sufficient to reveal distinct density/column density regimes (nH 100 cm^-3, 500 cm^-3, and >1000 cm^-3), closely related to the usual definitions of diffuse, translucent and high-column-density regions. Adding two UV-sensitive tracers, the J = 1-0 line of HCO+ and the N = 1-0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO+ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO+ intensity ratio in UV-illuminated regions. Finer distinctions in density classes (nH 7 × 10^3 cm^-3, 4 × 10^4 cm^-3) for the densest regions are also identified, likely related to the higher critical density of the CN and HCO+ (1-0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Conclusions: Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. The association of simultaneous multi-line, wide-field mapping and powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers.
Data products associated with this paper are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A12 and at http://www.iram.fr/ pety/ORION-B</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2988831','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2988831"><span>Comparing implementations of penalized weighted least-squares sinogram restoration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick</p> <p>2010-01-01</p> <p>Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. 
In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. 
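The iterative strategy just described amounts to solving the PWLS normal equations (W + βR)x = Wy with conjugate gradients, where W is the diagonal weight matrix and R a roughness penalty. A toy one-dimensional sketch, with a hypothetical second-difference penalty and made-up weights (not the authors' preconditioned implementation):

```python
import numpy as np

def pwls_restore_cg(y, w, beta, iters=200):
    """Solve (W + beta*R) x = W y by conjugate gradient, with W = diag(w)
    and R the 1-D second-difference (path-Laplacian) roughness penalty.
    Toy PWLS sketch; the matrix is applied implicitly to exploit sparsity."""
    def apply_A(x):
        rx = np.empty_like(x)
        rx[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:]
        rx[0] = x[0] - x[1]
        rx[-1] = x[-1] - x[-2]
        return w * x + beta * rx

    x = np.zeros_like(y, dtype=float)
    r = w * y - apply_A(x)          # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if np.sqrt(rs) < 1e-12:
            break
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Because W + βR is symmetric positive definite and very sparse, each CG iteration costs only a few vector operations, which is the structural advantage the iterative approach exploits over direct matrix inversion.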
Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CTM....21..646N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CTM....21..646N"><span>A Burke-Schumann analysis of diffusion-flame structures supported by a burning droplet</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nayagam, Vedha; Dietrich, Daniel L.; Williams, Forman A.</p> <p>2017-07-01</p> <p>A Burke-Schumann description of three different regimes of combustion of a fuel droplet in an oxidising atmosphere, namely the premixed-flame regime, the partial-burning regime and the diffusion-flame regime, is presented by treating the fuel and oxygen leakage fractions through the flame as known parameters. The analysis shows that the burning-rate constant, the flame-standoff ratio, and the flame temperature in these regimes can be obtained from the classical droplet-burning results by suitable definitions of an effective ambient oxygen mass fraction and an effective fuel concentration in the droplet interior. The results show that increasing oxygen leakage alone through the flame lowers both the droplet burning rate and the flame temperature, whereas leakage of fuel alone leaves the burning rate unaffected while reducing the flame temperature and moving the flame closer to the droplet surface. 
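The classical droplet-burning results that the Burke-Schumann analysis maps onto follow the d²-law, in which the squared diameter decreases linearly in time at a rate set by the burning-rate constant. A minimal sketch of that relation only, with illustrative numbers (the paper's effective-parameter definitions are not reproduced here):

```python
import math

def d2_law_lifetime(d0, K):
    """Classical d^2-law: d(t)^2 = d0^2 - K*t, so the droplet burns out
    at t = d0^2 / K. Units here are illustrative: d0 in mm, K in mm^2/s."""
    return d0 ** 2 / K

def diameter(t, d0, K):
    """Droplet diameter at time t under the d^2-law (clamped at burnout)."""
    return math.sqrt(max(d0 ** 2 - K * t, 0.0))

# Example: a 1 mm droplet with burning-rate constant K = 0.8 mm^2/s.
t_burn = d2_law_lifetime(1.0, 0.8)
```

In the regimes analyzed above, fuel or oxygen leakage effectively rescales K (and the flame temperature) without changing this functional form.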
Solutions for the partial-burning regime are shown to exist only for a limited range of fuel and oxygen leakage fractions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70018805','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70018805"><span>Frictional slip of granite at hydrothermal conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Blanpied, M.L.; Lockner, D.A.; Byerlee, J.D.</p> <p>1995-01-01</p> <p>To measure the strength, sliding behavior, and friction constitutive properties of faults at hydrothermal conditions, laboratory granite faults containing a layer of granite powder (simulated gouge) were slid. The mechanical results define two regimes. The first regime includes dry granite up to at least 845°C and wet granite below 250°C. In this regime the coefficient of friction is high (μ = 0.7 to 0.8) and depends only modestly on temperature, slip rate, and PH2O. The second regime includes wet granite above ~350°C. In this regime friction decreases considerably with increasing temperature (temperature weakening) and with decreasing slip rate (velocity strengthening). These regimes correspond well to those identified in sliding tests on ultrafine quartz. The results highlight the importance of fluid-assisted deformation processes active in faults at depth and the need for laboratory studies on the roles of additional factors such as fluid chemistry, large displacements, higher concentrations of phyllosilicates, and time-dependent fault healing.
-from Authors</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1431063-optimal-structure-parameter-learning-ising-models','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1431063-optimal-structure-parameter-learning-ising-models"><span>Optimal structure and parameter learning of Ising models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...</p> <p>2018-03-16</p> <p>Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer.
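The interaction screening idea can be illustrated on a toy three-spin model: the local objective E[exp(-s_i * H_i)], where H_i is the field spin i feels from its neighbours, is convex and, in the population limit, is minimized exactly at the true couplings. A sketch using the exact Boltzmann distribution in place of samples (all coupling values are illustrative, and plain gradient descent stands in for the paper's optimizer):

```python
import itertools
import numpy as np

# Ground-truth couplings and fields of a 3-spin Ising model (illustrative values).
J = {(0, 1): 0.5, (0, 2): -0.3, (1, 2): 0.4}
h = np.array([0.2, -0.1, 0.0])

states = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
energy = sum(Jij * states[:, i] * states[:, j] for (i, j), Jij in J.items()) + states @ h
p = np.exp(energy)
p /= p.sum()          # exact Boltzmann distribution stands in for "infinite samples"

def iso(theta):
    """Interaction screening objective for spin 0:
    E[exp(-s0 * (theta0*s1 + theta1*s2 + theta2))]."""
    field = states[:, 1] * theta[0] + states[:, 2] * theta[1] + theta[2]
    return np.sum(p * np.exp(-states[:, 0] * field))

# Minimise the (convex) objective by plain gradient descent.
theta = np.zeros(3)
for _ in range(20000):
    field = states[:, 1] * theta[0] + states[:, 2] * theta[1] + theta[2]
    wts = p * np.exp(-states[:, 0] * field)
    grad = -np.array([np.sum(wts * states[:, 0] * states[:, 1]),
                      np.sum(wts * states[:, 0] * states[:, 2]),
                      np.sum(wts * states[:, 0])])
    theta -= 0.05 * grad
```

The screening property is visible in the gradient: at the true parameters the weight p * exp(-s0*field) no longer depends on s0, so every gradient component cancels over the two signs of s0 and the true couplings are a stationary point.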
Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApPhA.124..237M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApPhA.124..237M"><span>Nanosecond laser ablation of target Al in a gaseous medium: explosive boiling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mazhukin, V. I.; Mazhukin, A. V.; Demin, M. M.; Shapranov, A. V.</p> <p>2018-03-01</p> <p>An approximate mathematical description of the processes of homogeneous nucleation and homogeneous evaporation (explosive boiling) of a metal target (Al) under the influence of ns laser radiation is proposed in the framework of the hydrodynamic model. Within the continuum approach, a multi-phase, multi-front hydrodynamic model and a computational algorithm are designed to simulate nanosecond laser ablation of the metal targets immersed in gaseous media. The proposed approach is intended for modeling and detailed analysis of the mechanisms of heterogeneous and homogeneous evaporation and their interaction with each other. It is shown that the proposed model and computational algorithm allow modeling of interrelated mechanisms of heterogeneous and homogeneous evaporation of metals, manifested in the form of pulsating explosive boiling. Modeling has shown that explosive evaporation in metals is due to the presence of a near-surface temperature maximum.
It has been established that in nanosecond pulsed laser ablation, such exposure regimes can be implemented in which phase explosion is the main mechanism of material removal.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27813272','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27813272"><span>Experiment for validation of fluid-structure interaction models and algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hessenthaler, A; Gaddum, N R; Holub, O; Sinkus, R; Röhrle, O; Nordsletten, D</p> <p>2017-09-01</p> <p>In this paper a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-setup FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks.
Steady-state and periodic steady-state test cases with constant and periodic inflow were established. The focus of the experiment is on biomedical engineering applications, with flow in the laminar regime at Reynolds numbers of 1283 and 651. Flow and solid domains were defined using computer-aided design (CAD) tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by using magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion. Copyright © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016APS..MARS22001L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016APS..MARS22001L"><span>Scalable real space pseudopotential density functional codes for materials in the exascale regime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack</p> <p></p> <p>Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet, there are many systems of notably larger size where quantum mechanical accuracy is desired, but scalability proves to be a hindrance.
Such systems include large biological molecules, complex nanostructures, or mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10^4 atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AAS...22714717Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AAS...22714717Z"><span>Post-processing images from the WFIRST-AFTA coronagraph testbed</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zimmerman, Neil T.; Ygouf, Marie; Pueyo, Laurent; Soummer, Remi; Perrin, Marshall D.; Mennesson, Bertrand; Cady, Eric; Mejia Prada, Camilo</p> <p>2016-01-01</p> <p>The concept for the exoplanet imaging instrument on WFIRST-AFTA relies on the development of mission-specific data processing tools to reduce the speckle noise floor. No instruments have yet functioned on the sky in the planet-to-star contrast regime of the proposed coronagraph (1E-8). Therefore, starlight subtraction algorithms must be tested on a combination of simulated and laboratory data sets to give confidence that the scientific goals can be reached. The High Contrast Imaging Testbed (HCIT) at Jet Propulsion Lab has carried out several technology demonstrations for the instrument concept, demonstrating 1E-8 raw (absolute) contrast.
Here, we have applied a mock reference differential imaging strategy to HCIT data sets, treating one subset of images as a reference star observation and another subset as a science target observation. We show that algorithms like KLIP (Karhunen-Loève Image Projection), by suppressing residual speckles, enable the recovery of exoplanet signals at contrast of order 2E-9.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1414922-discontinuous-galerkin-algorithms-fully-kinetic-plasmas','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1414922-discontinuous-galerkin-algorithms-fully-kinetic-plasmas"><span>Discontinuous Galerkin algorithms for fully kinetic plasmas</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Juno, J.; Hakim, A.; TenBarge, J.</p> <p></p> <p>Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set.
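The KLIP step described in the coronagraph entry above, building a Karhunen-Loève basis from reference frames and projecting it out of the science frame, can be sketched on synthetic data; the image dimensions, number of modes, and planet position below are hypothetical, not testbed values.

```python
import numpy as np

rng = np.random.default_rng(1)
npix, nref, kmodes = 200, 30, 5

# Synthetic speckle pattern: a few fixed spatial modes with random amplitudes.
spatial_modes = rng.normal(size=(kmodes, npix))
refs = rng.normal(size=(nref, kmodes)) @ spatial_modes    # reference star frames
planet = np.zeros(npix)
planet[42] = 5.0                                          # hypothetical planet signal
science = rng.normal(size=kmodes) @ spatial_modes + planet

# KLIP sketch: Karhunen-Loeve basis from the mean-subtracted references,
# then project the speckle subspace out of the science frame.
mean_ref = refs.mean(axis=0)
u, s, vt = np.linalg.svd(refs - mean_ref, full_matrices=False)
kl = vt[:kmodes]                                          # leading KL modes (orthonormal rows)
centered = science - mean_ref
residual = centered - kl.T @ (kl @ centered)
```

Because the speckles lie in the subspace spanned by the reference frames while the planet does not, the projection removes the speckles almost entirely and leaves the planet signal nearly intact.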
A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ApSS..406..136V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ApSS..406..136V"><span>Investigation of transient dynamics of capillary assisted particle assembly yield</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Virganavičius, D.; Juodėnas, M.; Tamulevičius, T.; Schift, H.; Tamulevičius, S.</p> <p>2017-06-01</p> <p>In this paper, the transient behavior of the particle assembly yield dynamics when switching from low yield to high yield deposition at different velocity and thermal regimes is investigated. Capillary force assisted particle assembly (CAPA) using colloidal suspension of green fluorescent 270 nm diameter polystyrene beads was performed on patterned poly (dimethyl siloxane) substrates using a custom-built deposition setup. Two types of patterns with different trapping site densities were used to assess CAPA process dynamics and the influence of pattern density and geometry on the deposition yield transitions. Closely packed 300 nm diameter circular pits ordered in hexagonal arrangement with 300 nm pitch, and 2 × 2 mm2 square pits with 2 μm spacing were used. 2-D regular structures of the deposited particles were investigated by means of optical fluorescence and scanning electron microscopy. The fluorescence micrographs were analyzed using a custom algorithm enabling to identify particles and calculate efficiency of the deposition performed at different regimes.
Relationship between the spatial distribution of particles in transition zone and ambient conditions was evaluated and quantified by approximation of the yield profile with a logistic function.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AIPC.1643..305M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AIPC.1643..305M"><span>Fitting a circular distribution based on nonnegative trigonometric sums for wind direction in Malaysia</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Masseran, Nurulkamal; Razali, Ahmad Mahir; Ibrahim, Kamarulzaman; Zaharim, Azami; Sopian, Kamaruzzaman</p> <p>2015-02-01</p> <p>Wind direction has a substantial effect on the environment and human lives. As examples, the wind direction influences the dispersion of particulate matter in the air and affects the construction of engineering structures, such as towers, bridges, and tall buildings. Therefore, a statistical analysis of the wind direction provides important information about the wind regime at a particular location. In addition, knowledge of the wind direction and wind speed can be used to derive information about the energy potential. This study investigated the characteristics of the wind regime of Mersing, Malaysia. A circular distribution based on Nonnegative Trigonometric Sums (NNTS) was fitted to a histogram of the average hourly wind direction data. The Newton-like manifold algorithm was used to estimate the parameter of each component of the NNTS model. Next, the suitability of each NNTS model was judged based on a graphical representation and Akaike's Information Criteria. 
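The NNTS construction guarantees a circular density that is nonnegative by design and normalised once the coefficient constraint is enforced; a minimal sketch with illustrative coefficients (not the fitted Mersing model):

```python
import numpy as np

def nnts_density(theta, c):
    """Nonnegative trigonometric sums density:
    f(theta) = |sum_k c_k exp(i k theta)|^2 / (2*pi),
    nonnegative by construction and normalised when sum_k |c_k|^2 = 1."""
    c = np.asarray(c, dtype=complex)
    c = c / np.sqrt(np.sum(np.abs(c) ** 2))   # enforce the normalisation constraint
    k = np.arange(len(c))
    s = np.exp(1j * np.outer(theta, k)) @ c
    return np.abs(s) ** 2 / (2 * np.pi)

theta = np.linspace(0, 2 * np.pi, 4001)
f = nnts_density(theta, [1.0, 0.5 + 0.3j, 0.2])   # illustrative 3-component model
```

Adding components (longer coefficient vectors) lets the density develop more modes, which is why model-order selection criteria such as AIC are needed to choose the number of components.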
The study found that the NNTS model with six or more components was able to fit the wind directional data for the Mersing station.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4040593','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4040593"><span>Blended particle filters for large-dimensional chaotic dynamical systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.</p> <p>2014-01-01</p> <p>A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. 
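A plain bootstrap particle filter, the non-blended baseline that such blended schemes improve upon, can be sketched on a one-dimensional random walk; the noise levels and particle count are illustrative assumptions and this is not the blended algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 2000              # time steps, particles
sv, ov = 0.5, 1.0             # process and observation noise std

# Simulate a hidden random-walk state and noisy observations of it.
truth = np.cumsum(rng.normal(0.0, sv, T))
obs = truth + rng.normal(0.0, ov, T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = np.zeros(N)
est = np.empty(T)
for t in range(T):
    particles = particles + rng.normal(0.0, sv, N)       # propagate through dynamics
    w = np.exp(-0.5 * ((obs[t] - particles) / ov) ** 2)  # Gaussian observation likelihood
    w /= w.sum()
    est[t] = np.sum(w * particles)                       # posterior-mean estimate
    idx = rng.choice(N, size=N, p=w)                     # multinomial resampling
    particles = particles[idx]
```

The blended filters above replace this single particle cloud with particles in a low-dimensional adaptive subspace plus Gaussian statistics elsewhere, precisely because a plain particle cloud degenerates in high dimensions.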
The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.891a2114K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.891a2114K"><span>The calculating study of the moisture transfer influence at the temperature field in a porous wet medium with internal heat sources</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kuzevanov, V. S.; Garyaev, A. B.; Zakozhurnikova, G. S.; Zakozhurnikov, S. S.</p> <p>2017-11-01</p> <p>A porous wet medium with solid and gaseous components, with distributed or localized heat sources was considered. The regimes of temperature change during heating at various initial material moisture contents were studied. A mathematical model of the investigated wet porous multicomponent medium with internal heat sources was developed, taking into account heat transfer by conduction with variable thermal parameters and porosity, heat transfer by radiation, chemical reactions, drying and moistening of solids, heat and mass transfer of volatile reaction products by filtration flows, and moisture transfer. A numerical algorithm and a computer program implementing the proposed mathematical model were created, allowing study of the heating dynamics under local or distributed heat release, and in particular the impact of moisture transfer in the medium on the temperature field. Temperature histories were obtained at different points of the medium for different initial moisture levels.
Conclusions were drawn about the possibility of controlling the heating regimes of a solid porous body via the initial moisture distribution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.7015E..5HK','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.7015E..5HK"><span>Implementation of the pyramid wavefront sensor as a direct phase detector for large amplitude aberrations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kupke, Renate; Gavel, Don; Johnson, Jess; Reinig, Marc</p> <p>2008-07-01</p> <p>We investigate the non-modulating pyramid wave-front sensor's (P-WFS) implementation in the context of Lick Observatory's Villages visible light AO system on the Nickel 1-meter telescope. A complete adaptive optics correction, using a non-modulated P-WFS in slope sensing mode as a bootstrap to a regime in which the P-WFS can act as a direct phase sensor, is explored. An iterative approach to reconstructing the wave-front phase, given the pyramid wave-front sensor's non-linear signal, is developed. Using Monte Carlo simulations, the iterative reconstruction method's photon noise propagation behavior is compared to both the pyramid sensor used in slope-sensing mode and the traditional Shack Hartmann sensor's theoretical performance limits. We determine that bootstrapping using the P-WFS as a slope sensor does not offer enough correction to bring the phase residuals into a regime in which the iterative algorithm can provide much improvement in phase measurement. 
It is found that both the iterative phase reconstructor and the slope reconstruction methods offer an advantage in noise propagation over Shack Hartmann sensors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9976E..0HT','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9976E..0HT"><span>Statistical modeling of natural backgrounds in hyperspectral LWIR data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph</p> <p>2016-09-01</p> <p>Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. 
Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20020073222&hterms=TOM&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DTOM','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20020073222&hterms=TOM&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DTOM"><span>Tropospheric Ozone during the TRACE-P Mission: Comparison between TOMS Satellite Retrievals and Aircraft Lidar Data, March 2001</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Frolov, A. D.; Thompson, A. M.; Hudson, R. D.; Browell, E. V.; Oltmans, S. J.; Witte, J. C.; Bhartia, P. K. (Technical Monitor)</p> <p>2002-01-01</p> <p>Over the past several years, we have developed two new tropospheric ozone retrievals from the TOMS (Total Ozone Mapping Spectrometer) satellite instrument that are of sufficient resolution to follow pollution episodes. The modified-residual technique uses v. 7 TOMS total ozone and is applicable to tropical regimes in which the wave-one pattern in total ozone is observed. The TOMS-direct method ('TDOT' = TOMS Direct Ozone in the Troposphere) represents a new algorithm that uses TOMS radiances directly to extract tropospheric ozone in regions of constant stratospheric ozone. It is not geographically restricted, using meteorological regimes as the basis for classifying TOMS radiances and for selecting appropriate comparison data. TDOT is useful where tropospheric ozone displays high mixing ratios and variability characteristic of pollution. Some of these episodes were observed downwind of Asian biomass burning during the TRACE-P (Transport and Atmospheric Chemical Evolution-Pacific) field experiment in March 2001. 
This paper features comparisons among TDOT tropospheric ozone column depth, integrated uv-DIAL measurements made from NASA's DC-8, and ozonesonde data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.A13G0301S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.A13G0301S"><span>Mixture distributions of wind speed in the UAE</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shin, J.; Ouarda, T.; Lee, T. S.</p> <p>2013-12-01</p> <p>Wind speed probability distribution is commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when the wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of the wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula. Mixture distributional characteristics of wind speed were detected in some of these studies. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. 
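The mixture-distribution fitting described in this record can be illustrated with a minimal sketch: fitting a two-component Weibull mixture by direct maximization of the log-likelihood. The synthetic data, starting values, and optimizer choice below are illustrative assumptions, not the authors' MHML method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Synthetic bimodal "wind speed" sample drawn from two Weibull regimes
v = np.concatenate([weibull_min.rvs(2.0, scale=3.0, size=600, random_state=rng),
                    weibull_min.rvs(3.5, scale=9.0, size=400, random_state=rng)])

def neg_loglik(theta):
    """Negative log-likelihood of a two-component Weibull mixture."""
    w, k1, l1, k2, l2 = theta
    pdf = (w * weibull_min.pdf(v, k1, scale=l1)
           + (1.0 - w) * weibull_min.pdf(v, k2, scale=l2))
    return -np.sum(np.log(pdf + 1e-300))  # guard against log(0)

x0 = np.array([0.5, 1.5, 2.0, 3.0, 8.0])        # rough starting guess
bounds = [(0.01, 0.99)] + [(0.05, 50.0)] * 4    # keep weight and shapes/scales valid
fit = minimize(neg_loglik, x0, bounds=bounds, method="L-BFGS-B")
w, k1, l1, k2, l2 = fit.x
print(f"weight={w:.2f}  comp1=({k1:.2f},{l1:.2f})  comp2=({k2:.2f},{l2:.2f})")
```

With well-separated components, a bounded quasi-Newton search typically recovers both regimes; a meta-heuristic estimator such as the paper's MHML targets the harder cases where local optimization stalls.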
Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. Ten mixture distributions were constructed by combining four probability distributions: the Normal, Gamma, Weibull and Extreme Value type-one (EV-1) distributions. Three parameter estimation methods, the Expectation-Maximization algorithm, the Least Squares method and the Meta-Heuristic Maximum Likelihood (MHML) method, were employed to estimate the parameters of the mixture distributions. In order to compare the goodness-of-fit of the tested distributions and parameter estimation methods for sample wind data, the adjusted coefficient of determination, Bayesian Information Criterion (BIC) and Chi-squared statistics were computed. Results indicate that MHML presents the best parameter estimation performance for the tested mixture distributions. At most of the 7 stations, mixture distributions give the best fit. When the wind speed regime shows mixture distributional characteristics, most of these regimes are kurtotic. In particular, applying mixture distributions at these stations significantly improves the fit over the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMSM33B2653S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMSM33B2653S"><span>Investigating Whistler Mode Wave Diffusion Coefficients at Mars</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shane, A. D.; Liemohn, M. 
W.; Xu, S.; Florie, C.</p> <p>2017-12-01</p> <p>Observations of electron pitch angle distributions have suggested collisions are not the only pitch angle scattering process occurring in the Martian ionosphere. This unknown scattering process is causing high energy electrons (>100 eV) to become isotropized. Whistler mode waves are one pitch angle scattering mechanism known to preferentially scatter high energy electrons in certain plasma regimes. The distribution of whistler mode wave diffusion coefficients are dependent on the background magnetic field strength and thermal electron density, as well as the frequency and wave normal angle of the wave. We have solved for the whistler mode wave diffusion coefficients using the quasi-linear diffusion equations and have integrated them into a superthermal electron transport (STET) model. Preliminary runs have produced results that qualitatively match the observed electron pitch angle distributions at Mars. We performed parametric sweeps over magnetic field, thermal electron density, wave frequency, and wave normal angle to understand the relationship between the plasma parameters and the diffusion coefficient distributions, but also to investigate what regimes whistler mode waves scatter only high energy electrons. Increasing the magnetic field strength and lowering the thermal electron density shifts the distribution of diffusion coefficients toward higher energies and lower pitch angles. We have created an algorithm to identify Mars Atmosphere Volatile and EvolutioN (MAVEN) observations of high energy isotropic pitch angle distributions in the Martian ionosphere. We are able to map these distributions at Mars, and compare the conditions under which these are observed at Mars with the results of our parametric sweeps. 
Lastly, we will also look at each term in the kinetic diffusion equation to determine if the energy and mixed diffusion coefficients are important enough to incorporate into STET as well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AMT....11..161N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AMT....11..161N"><span>Error sources in the retrieval of aerosol information over bright surfaces from satellite measurements in the oxygen A band</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.</p> <p>2018-01-01</p> <p><p class="p">Retrieving aerosol optical thickness and aerosol layer height over a bright surface from measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as critical surface albedo regime, which is a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo for aerosol layer height retrievals in the oxygen A band, and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as a sum of atmospheric path contribution and surface contribution, obtained using a radiative transfer model. 
Furthermore, error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution - an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high surface albedo regimes. The consequence of this anti-correlation is demonstrated with measured spectra in the oxygen A band from the GOME-2 instrument on board the Metop-A satellite over the 2010 Russian wildfires incident.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/978002','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/978002"><span>The Genesis Mission: Solar Wind Conditions, and Implications for the FIP Fractionation of the Solar Wind.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Reisenfeld, D. B.; Wiens, R. C.; Barraclough, B. L.</p> <p>2005-01-01</p> <p>The NASA Genesis mission collected solar wind on ultrapure materials between November 30, 2001 and April 1, 2004. The samples were returned to Earth September 8, 2004. Despite the hard landing that resulted from a failure of the avionics to deploy the parachute, many samples were returned in a condition that will permit analyses. Sample analyses of these samples should give a far better understanding of the solar elemental and isotopic composition (Burnett et al. 2003). 
Further, the photospheric composition is thought to be representative of the solar nebula, so that the Genesis mission will provide a new baseline for the average solar nebula composition with which to compare present-day compositions of planets, meteorites, and asteroids. Sample analysis is currently underway. The Genesis samples must be placed in the context of the solar and solar wind conditions under which they were collected. Solar wind is fractionated from the photosphere by the forces that accelerate the ions off of the Sun. This fractionation appears to be ordered by the first ionization potential (FIP) of the elements, with the tendency for low-FIP elements to be over-abundant in the solar wind relative to the photosphere, and high-FIP elements to be under-abundant (e.g. Geiss, 1982; von Steiger et al., 2000). In addition, the extent of elemental fractionation differs across different solar wind regimes. Therefore, Genesis collected solar wind samples sorted into three regimes: 'fast wind' or 'coronal hole' (CH), 'slow wind' or 'interstream' (IS), and 'coronal mass ejection' (CME). To carry this out, plasma ion and electron spectrometers (Barraclough et al., 2003) continuously monitored the solar wind proton density, velocity, temperature, the alpha/proton ratio, and angular distribution of suprathermal electrons, and those parameters were in turn used in a rule-based algorithm that assigned the most probable solar wind regime (Neugebauer et al., 2003). At any given time, only one of three regime-specific collectors (CH, IS, or CME) was exposed to the solar wind. Here we report on the regime-specific solar wind conditions from in-situ instruments over the course of the collection period. 
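A rule-based regime assignment of the kind used on Genesis can be sketched as a simple decision function. The thresholds and inputs below are illustrative assumptions, not the actual criteria of Neugebauer et al. (2003):

```python
# Hypothetical rule-based solar wind regime tagger in the spirit of the
# Genesis scheme. The cut values (0.08 alpha/proton, 500 km/s) are
# illustrative assumptions only.
def classify_regime(speed_km_s: float, alpha_proton_ratio: float,
                    bidirectional_electrons: bool) -> str:
    """Return the most probable solar wind regime: 'CME', 'CH', or 'IS'."""
    # Counterstreaming suprathermal electrons or an enhanced alpha/proton
    # ratio are classic coronal mass ejection signatures.
    if bidirectional_electrons or alpha_proton_ratio > 0.08:
        return "CME"
    # Fast wind is associated with coronal holes.
    if speed_km_s > 500.0:
        return "CH"
    # Everything else is treated as slow, interstream wind.
    return "IS"

print(classify_regime(420.0, 0.04, False))  # typical slow wind -> IS
```

In the mission itself, each classification decision would then select which of the three regime-specific collector arrays is exposed.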
Further, we use composition data from the SWICS (Solar Wind Ion Composition Spectrometer) instrument on ACE (McComas et al., 1998) to examine the FIP fractionation between solar wind regimes, and make a preliminary comparison of these to the FIP analysis of Ulysses/SWICS composition data (von Steiger et al. 2000). Our elemental fractionation study includes a reevaluation of the Ulysses FIP analysis in light of newly reported photospheric abundance data (Asplund, Grevesse & Sauval, 2005). The new abundance data indicate a metallicity (Z/X) for the Sun almost a factor of two lower than that reported in the widely used compilation of Anders & Grevesse (1989). The new photospheric abundances suggest a lower degree of solar wind fractionation than previously reported by von Steiger et al. (2000) for the first Ulysses polar orbit (1991-1998).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110015332','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110015332"><span>Gas Phase Pressure Effects on the Apparent Thermal Conductivity of JSC-1A Lunar Regolith Simulant</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Yuan, Zeng-Guang; Kleinhenz, Julie E.</p> <p>2011-01-01</p> <p>Gas phase pressure effects on the apparent thermal conductivity of a JSC-1A/air mixture have been experimentally investigated under steady state thermal conditions from 10 kPa to 100 kPa. The result showed that apparent thermal conductivity of the JSC-1A/air mixture decreased when pressure was lowered to 80 kPa. At 10 kPa, the conductivity decreased to 0.145 W/m/degree C, which is significantly lower than 0.196 W/m/degree C at 100 kPa. This finding is consistent with the results of previous researchers. The reduction of the apparent thermal conductivity at low pressures is ascribed to the Knudsen effect. 
Since the characteristic length of the void space in bulk JSC-1A varies over a wide range, both the Knudsen regime and the continuum regime can coexist in the pore space. The volume ratio of the two regimes varies with pressure. Thus, as gas pressure decreases, the gas volume governed by the Knudsen regime increases. In the Knudsen regime the resistance to heat flow is higher than in the continuum regime, resulting in the observed pressure dependency of the apparent thermal conductivity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28841243','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28841243"><span>Untangling outcomes of de jure and de facto community-based management of natural resources.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Agarwala, Meghna; Ginsberg, Joshua R</p> <p>2017-12-01</p> <p>We systematically reviewed the literature on the tragedy of the commons and common-property resources. We segregated studies by legal management regimes (de jure regimes) and management that develops in practice (de facto regimes) to understand how the structure of regime formation affects the outcome of community management on sustainability of resource use. De facto regimes, developed within the community, are more likely to have positive impacts on the resource. However, de facto regimes are fragile and not resilient in the face of increased population pressure and unregulated markets, and de facto management regimes are less successful where physical exclusion of external agents from resources is more difficult. Yet, formalization or imposition of de jure management regimes can have complicated impacts on sustainability. 
The imposition of de jure regimes usually has a negative outcome when existing de facto regimes operate at larger scales than the imposed de jure regime. In contrast, de jure regimes have largely positive impacts when the de facto regimes operate at scales smaller than the overlying de jure regimes. Formalization may also be counterproductive because of elite capture and the resulting de facto privatization (that allows elites to effectively exclude others) or de facto open access (where the disenfranchised may resort to theft and elites cannot effectively exclude them). This underscores that although the global movement to formalize community-management regimes may address some forms of inequity and may produce better outcomes, it does not ensure resource sustainability and may lead to greater marginalization of users. Comparison of governance systems that differentiate between initiatives that legitimize existing de facto regimes and systems that create new de facto regimes, investigations of new top-down de jure regimes, and studies that further examine different approaches to changing de jure regimes to de facto regimes are avenues for further inquiry. © 2017 Society for Conservation Biology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21969994','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21969994"><span>Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: main content.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Orellana, Liliana; Rotnitzky, Andrea; Robins, James M</p> <p>2010-01-01</p> <p>Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. 
Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. 
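The idea of indexing candidate regimes and maximizing an estimated expected utility can be sketched with a toy simulation. The data-generating model and the simple inverse-probability-weighted value estimator below are illustrative assumptions, not the authors' locally efficient, double-robust estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(0.0, 1.0, n)       # patient covariate
a = rng.integers(0, 2, n)          # randomized observed treatment, P(A=1) = 0.5
# Hypothetical outcome model: treatment helps when x > 0.5, hurts otherwise.
y = (a == (x > 0.5)).astype(float) + rng.normal(0.0, 0.1, n)

def ipw_value(theta):
    """IPW estimate of E[Y] under the regime 'treat iff x > theta'."""
    follows = (a == (x > theta))   # did the observed treatment match the regime?
    return np.mean(follows * y / 0.5)  # weight by 1 / P(observed treatment)

# Grid search over the one-dimensional regime index theta.
thetas = np.linspace(0.0, 1.0, 101)
values = [ipw_value(t) for t in thetas]
best = thetas[int(np.argmax(values))]
print(f"estimated optimal threshold: {best:.2f}")
```

With this data-generating model the estimated optimal threshold lands near 0.5, the point where the treatment effect changes sign; the paper's machinery generalizes this grid search to higher-dimensional indices with efficient, double-robust estimation.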
In a companion paper in this issue of the journal we provide proofs of the main results.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li class="active"><span>20</span></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li class="active"><span>21</span></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014CoPhC.185.2391C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014CoPhC.185.2391C"><span>An energy- and charge-conserving, nonlinearly implicit, electromagnetic 1D-3V Vlasov-Darwin particle-in-cell algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, G.; Chacón, L.</p> <p>2014-10-01</p> <p>A recent proof-of-principle study proposes a nonlinear electrostatic implicit particle-in-cell (PIC) algorithm in one 
dimension (Chen et al., 2011). The algorithm employs a kinetically enslaved Jacobian-free Newton-Krylov (JFNK) method, and conserves energy and charge to numerical round-off. In this study, we generalize the method to electromagnetic simulations in 1D using the Darwin approximation to Maxwell's equations, which avoids radiative noise issues by ordering out the light wave. An implicit, orbit-averaged, time-space-centered finite difference scheme is employed in both the 1D Darwin field equations (in potential form) and the 1D-3V particle orbit equations to produce a discrete system that remains exactly charge- and energy-conserving. Furthermore, enabled by the implicit Darwin equations, exact conservation of the canonical momentum per particle in any ignorable direction is enforced via a suitable scattering rule for the magnetic field. We have developed a simple preconditioner that targets electrostatic waves and skin currents, and allows us to employ time steps O(√(m_i/m_e) c/v_Te) larger than the explicit CFL. Several 1D numerical experiments demonstrate the accuracy, performance, and conservation properties of the algorithm. In particular, the scheme is shown to be second-order accurate, and CPU speedups of more than three orders of magnitude vs. an explicit Vlasov-Maxwell solver are demonstrated in the "cold" plasma regime (where kλD ≪ 1).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1258585-unexpected-series-regular-frequency-spacing-scuti-stars-non-asymptotic-regime-methodology','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1258585-unexpected-series-regular-frequency-spacing-scuti-stars-non-asymptotic-regime-methodology"><span>Unexpected series of regular frequency spacing of δ Scuti stars in the non-asymptotic regime - I. 
The methodology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Paparo, M.; Benko, J. M.; Hareter, M.; ...</p> <p>2016-05-11</p> <p>In this study, a sequence search method was developed to search the regular frequency spacing in δ Scuti stars through visual inspection and an algorithmic search. We searched for sequences of quasi-equally spaced frequencies, containing at least four members per sequence, in 90 δ Scuti stars observed by CoRoT. We found an unexpectedly large number of independent series of regular frequency spacing in 77 δ Scuti stars (from one to eight sequences) in the non-asymptotic regime. We introduce the sequence search method presenting the sequences and echelle diagram of CoRoT 102675756 and the structure of the algorithmic search. Four sequences (echelle ridges) were found in the 5–21 d⁻¹ region where the pairs of the sequences are shifted (between 0.5 and 0.59 d⁻¹) by twice the value of the estimated rotational splitting frequency (0.269 d⁻¹). The general conclusions for the whole sample are also presented in this paper. The statistics of the spacings derived by the sequence search method, by FT (Fourier transform of the frequencies), and the statistics of the shifts are also compared. In many stars more than one almost equally valid spacing appeared. 
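A toy version of such a sequence search, a greedy scan that chains frequencies separated by approximately a trial spacing, can be sketched as follows (an illustrative sketch with made-up peak values, not the authors' algorithm):

```python
import numpy as np

def find_spaced_sequences(freqs, spacing, tol=0.05, min_len=4):
    """Collect sequences of frequencies separated by ~'spacing' (within tol)."""
    freqs = np.sort(np.asarray(freqs, dtype=float))
    sequences, used = [], set()
    for i, f0 in enumerate(freqs):
        if i in used:
            continue
        seq = [f0]
        target = f0 + spacing
        # Greedily extend the chain with the next frequency near the target.
        for j in range(i + 1, len(freqs)):
            if abs(freqs[j] - target) <= tol:
                seq.append(freqs[j])
                used.add(j)
                target = freqs[j] + spacing
        if len(seq) >= min_len:
            sequences.append(seq)
    return sequences

# Echelle-ridge-like test set: one ridge spaced by 2.25 d^-1 plus noise peaks
peaks = [5.1, 7.35, 9.6, 11.85, 14.1, 6.0, 8.9, 13.3]
ridges = find_spaced_sequences(peaks, spacing=2.25, tol=0.05)
print(ridges)  # one sequence of five members starting at 5.1
```

In practice the trial spacing itself is unknown, so a search like this is repeated over a grid of candidate spacings and the resulting sequences are inspected on an echelle diagram.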
In CoRoT 102675756, the two spacings (2.249 and 1.977 d⁻¹) are in better agreement with the sum of a possible 1.710 d⁻¹ large separation and two or one times, respectively, the value of the rotational frequency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70192438','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70192438"><span>The climate space of fire regimes in north-western North America</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Whitman, Ellen; Batllori, Enric; Parisien, Marc-André; Miller, Carol; Coop, Jonathan D.; Krawchuk, Meg A.; Chong, Geneva W.; Haire, Sandra L.</p> <p>2015-01-01</p> <p>Aim. Studies of fire activity along environmental gradients have been undertaken, but the results of such studies have yet to be integrated with fire-regime analysis. We characterize fire-regime components along climate gradients and a gradient of human influence. Location. We focus on a climatically diverse region of north-western North America extending from northern British Columbia, Canada, to northern Utah and Colorado, USA. Methods. We used a multivariate framework to collapse 12 climatic variables into two major climate gradients and binned them into 73 discrete climate domains. We examined variation in fire-regime components (frequency, size, severity, seasonality and cause) across climate domains. Fire-regime attributes were compiled from existing databases and Landsat imagery for 1897 large fires. 
Relationships among the fire-regime components, climate gradients and human influence were examined through bivariate regressions. The unique contribution of human influence was also assessed.Results. A primary climate gradient of temperature and summer precipitation and a secondary gradient of continentality and winter precipitation in the study area were identified. Fire occupied a distinct central region of such climate space, within which fire-regime components varied considerably. We identified significant interrelations between fire-regime components of fire size, frequency, burn severity and cause. The influence of humans was apparent in patterns of burn severity and ignition cause.Main conclusions. Wildfire activity is highest where thermal and moisture gradients converge to promote fuel production, flammability and ignitions. Having linked fire-regime components to large-scale climate gradients, we show that fire regimes – like the climate that controls them – are a part of a continuum, expanding on models of varying constraints on fire activity. The observed relationships between fire-regime components, together with the distinct role of climatic and human influences, generate variation in biotic communities. 
Thus, future changes to climate may lead to ecological changes through altered fire regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016E%26ES...49h2012S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016E%26ES...49h2012S"><span>Swirling Flow Computation at the Trailing Edge of Radial-Axial Hydraulic Turbines</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Susan-Resiga, Romeo; Muntean, Sebastian; Popescu, Constantin</p> <p>2016-11-01</p> <p>Modern hydraulic turbines require optimized runners within a range of operating points with respect to minimum weighted average draft tube losses and/or flow instabilities. Tractable optimization methodologies must include realistic estimations of the swirling flow exiting the runner and further ingested by the draft tube, prior to runner design. The paper presents a new mathematical model and the associated numerical algorithm for computing the swirling flow at the trailing edge of Francis turbine runner, operated at arbitrary discharge. The general turbomachinery throughflow theory is particularized for an arbitrary hub-to-shroud line in the meridian half-plane and the resulting boundary value problem is solved with the finite element method. The results obtained with the present model are validated against full 3D runner flow computations within a range of discharge value. The mathematical model incorporates the full information for the relative flow direction, as well as the curvatures of the hub-to-shroud line and meridian streamlines, respectively. 
It is shown that the flow direction can be frozen within a range of operating points in the neighborhood of the best efficiency regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870010554','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870010554"><span>An optimal output feedback gain variation scheme for the control of plants exhibiting gross parameter changes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Moerder, Daniel D.</p> <p>1987-01-01</p> <p>A concept for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes was developed. This was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed are capable of achieving gain variation histories which are arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significanly during operation. 
Many aerospace systems fall into this category.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014APS..MAR.T4004C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014APS..MAR.T4004C"><span>Dipolar order by disorder in the classical Heisenberg antiferromagnet on the kagome lattice</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chern, Gia-Wei</p> <p>2014-03-01</p> <p>The first experiments on the ``kagome bilayer'' SCGO triggered a wave of interest in kagome antiferromagnets in particular, and frustrated systems in general. A cluster of early seminal theoretical papers established kagome magnets as model systems for novel ordering phenomena, discussing in particular spin liquidity, partial order, disorder-free glassiness and order by disorder. Despite significant recent progress in understanding the ground state for the quantum S = 1 / 2 model, the nature of the low-temperature phase for the classical kagome Heisenberg antiferromagnet has remained a mystery: the non-linear nature of the fluctuations around the exponentially numerous harmonically degenerate ground states has not permitted a controlled theory, while its complex energy landscape has precluded numerical simulations at low temperature. Here we present an efficient Monte Carlo algorithm which removes the latter obstacle. Our simulations detect a low-temperature regime in which correlations saturate at a remarkably small value. 
Feeding these results into an effective model and analyzing the results in the framework of an appropriate field theory implies the presence of long-range dipolar spin order with a tripled unit cell.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUOSHE14B1411P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUOSHE14B1411P"><span>Atmospheric form drag over Arctic sea ice derived from high-resolution IceBridge elevation data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Petty, A.; Tsamados, M.; Kurtz, N. T.</p> <p>2016-02-01</p> <p>Here we present a detailed analysis of atmospheric form drag over Arctic sea ice, using high resolution, three-dimensional surface elevation data from the NASA Operation IceBridge Airborne Topographic Mapper (ATM) laser altimeter. Surface features in the sea ice cover are detected using a novel feature-picking algorithm. We derive information regarding the height, spacing and orientation of unique surface features from 2009-2014 across both first-year and multiyear ice regimes. The topography results are used to explicitly calculate atmospheric form drag coefficients; utilizing existing form drag parameterizations. The atmospheric form drag coefficients show strong regional variability, mainly due to variability in ice type/age. The transition from a perennial to a seasonal ice cover therefore suggest a decrease in the atmospheric form drag coefficients over Arctic sea ice in recent decades. 
These results are also being used to calibrate a recent form drag parameterization scheme included in the sea ice model CICE, to improve the representation of form drag over Arctic sea ice in global climate models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19800021234','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19800021234"><span>Film thickness for different regimes of fluid-film lubrication</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hamrock, B. J.</p> <p>1980-01-01</p> <p>Film thickness equations are provided for four fluid-film lubrication regimes found in elliptical contacts. These regimes are isoviscous-rigid; viscous-rigid; elastohydrodynamic lubrication of low-elastic-modulus materials (soft EHL), or isoviscous-elastic; and elastohydrodynamic lubrication of high-elastic-modulus materials (hard EHL), or viscous-elastic. The influence or lack of influence of elastic and viscous effects is the factor that distinguishes these regimes. The results are presented as a map of the lubrication regimes, with film thickness contours on a log-log grid of the viscosity and elasticity for three values of the ellipticity parameter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AIPC..706..187B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AIPC..706..187B"><span>The Material Point Method and Simulation of Wave Propagation in Heterogeneous Media</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bardenhagen, S. G.; Greening, D. R.; Roessig, K. 
M.</p> <p>2004-07-01</p> <p>The mechanical response of polycrystalline materials, particularly under shock loading, is of significant interest in a variety of munitions and industrial applications. Homogeneous continuum models have been developed to describe material response, including Equation of State, strength, and reactive burn models. These models provide good estimates of bulk material response. However, there is little connection to underlying physics and, consequently, they cannot be applied far from their calibrated regime with confidence. Both explosives and metals have important structure at the (energetic or single crystal) grain scale. The anisotropic properties of the individual grains and the presence of interfaces result in the localization of energy during deformation. In explosives energy localization can lead to initiation under weak shock loading, and in metals to material ejecta under strong shock loading. To develop accurate, quantitative and predictive models it is imperative to develop a sound physical understanding of the grain-scale material response. Numerical simulations are performed to gain insight into grain-scale material response. The Generalized Interpolation Material Point Method family of numerical algorithms, selected for their robust treatment of large deformation problems and convenient framework for implementing material interface models, are reviewed. A three-dimensional simulation of wave propagation through a granular material indicates the scale and complexity of a representative grain-scale computation. Verification and validation calculations on model bimaterial systems indicate the minimum numerical algorithm complexity required for accurate simulation of wave propagation across material interfaces and demonstrate the importance of interfacial decohesion. 
Preliminary results are presented which predict energy localization at the grain boundary in a metallic bicrystal.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19990040838','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19990040838"><span>Solving Upwind-Biased Discretizations: Defect-Correction Iterations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Diskin, Boris; Thomas, James L.</p> <p>1999-01-01</p> <p>This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h-1/3. (3) If both the operators have the second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which, by the way, can take into account the influence of discretized outflow boundary conditions as well) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. 
As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002AIPC..608...14K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002AIPC..608...14K"><span>New results in gravity dependent two-phase flow regime mapping</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kurwitz, Cable; Best, Frederick</p> <p>2002-01-01</p> <p>Accurate prediction of thermal-hydraulic parameters, such as the spatial gas/liquid orientation or flow regime, is required for implementation of two-phase systems. Although many flow regime transition models exist, accurate determination of both annular and slug regime boundaries is not well defined especially at lower flow rates. Furthermore, models typically indicate the regime as a sharp transition where data may indicate a transition space. Texas A&M has flown in excess of 35 flights aboard the NASA KC-135 aircraft with a unique two-phase package. These flights have produced a significant database of gravity dependent two-phase data including visual observations for flow regime identification. Two-phase flow tests conducted during recent zero-g flights have added to the flow regime database and are shown in this paper with comparisons to selected transition models. 
.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900009959','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900009959"><span>Rapid near-optimal trajectory generation and guidance law development for single-stage-to-orbit airbreathing vehicles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Calise, A. J.; Flandro, G. A.; Corban, J. E.</p> <p>1990-01-01</p> <p>General problems associated with on-board trajectory optimization, propulsion system cycle selection, and with the synthesis of guidance laws were addressed for an ascent to low-earth-orbit of an air-breathing single-stage-to-orbit vehicle. The NASA Generic Hypersonic Aerodynamic Model Example and the Langley Accelerator aerodynamic sets were acquired and implemented. Work related to the development of purely analytic aerodynamic models was also performed at a low level. A generic model of a multi-mode propulsion system was developed that includes turbojet, ramjet, scramjet, and rocket engine cycles. Provisions were made in the dynamic model for a component of thrust normal to the flight path. Computational results, which characterize the nonlinear sensitivity of scramjet performance to changes in vehicle angle of attack, were obtained and incorporated into the engine model. Additional trajectory constraints were introduced: maximum dynamic pressure; maximum aerodynamic heating rate per unit area; angle of attack and lift limits; and limits on acceleration both along and normal to the flight path. The remainder of the effort focused on required modifications to a previously derived algorithm when the model complexity cited above was added. 
In particular, analytic switching conditions were derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another for two cases: the case in which engine cycle operations can overlap, and the case in which engine cycle operations are mutually exclusive. The resulting guidance algorithm was implemented in software and exercised extensively. It was found that the approximations associated with the assumed time scale separation employed in this work are reasonable except over the Mach range from roughly 5 to 8. This phenomenon is due to the very large thrust capability of scramjets in this Mach regime when sized to meet the requirement for ascent to orbit. By accounting for flight path angle and flight path angle rate in construction of the flight path over this Mach range, the resulting algorithm provides the means for rapid near-optimal trajectory generation and propulsion cycle selection over the entire Mach range from take-off to orbit.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SMaS...25l5016J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SMaS...25l5016J"><span>Bio-inspired online variable recruitment control of fluidic artificial muscles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew</p> <p>2016-12-01</p> <p>This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. 
Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. 
Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JCoPh.316..534S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JCoPh.316..534S"><span>Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit</p> <p>2016-07-01</p> <p>A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. 
The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22572328-bayesian-inference-nonlinear-unsteady-aerodynamics-from-aeroelastic-limit-cycle-oscillations','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22572328-bayesian-inference-nonlinear-unsteady-aerodynamics-from-aeroelastic-limit-cycle-oscillations"><span>Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Sandhu, Rimple; Poirel, Dominique; Pettit, Chris</p> <p>2016-07-01</p> <p>A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic systemmore » leading to LCO. 
The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000SPIE.3984..338L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000SPIE.3984..338L"><span>Intelligent design optimization of a shape-memory-alloy-actuated reconfigurable wing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lagoudas, Dimitris C.; Strelec, Justin K.; Yen, John; Khan, Mohammad A.</p> <p>2000-06-01</p> <p>The unique thermal and mechanical properties offered by shape memory alloys (SMAs) present exciting possibilities in the field of aerospace engineering. When properly trained, SMA wires act as linear actuators by contracting when heated and returning to their original shape when cooled. It has been shown experimentally that the overall shape of an airfoil can be altered by activating several attached SMA wire actuators. This shape-change can effectively increase the efficiency of a wing in flight at several different flow regimes. 
To determine the necessary placement of these wire actuators within the wing, an optimization method that incorporates a fully-coupled structural, thermal, and aerodynamic analysis has been utilized. Due to the complexity of the fully-coupled analysis, intelligent optimization methods such as genetic algorithms have been used to efficiently converge to an optimal solution. The genetic algorithm used in this case is a hybrid version with global search and optimization capabilities augmented by the simplex method as a local search technique. For the reconfigurable wing, each chromosome represents a realizable airfoil configuration and its genes are the SMA actuators, described by their location and maximum transformation strain. The genetic algorithm has been used to optimize this design problem to maximize the lift-to-drag ratio for a reconfigured airfoil shape.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3816434','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3816434"><span>Mapping strain rate dependence of dislocation-defect interactions by atomistic simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Fan, Yue; Osetskiy, Yuri N.; Yip, Sidney; Yildiz, Bilge</p> <p>2013-01-01</p> <p>Probing the mechanisms of defect–defect interactions at strain rates lower than 106 s−1 is an unresolved challenge to date to molecular dynamics (MD) techniques. Here we propose an original atomistic approach based on transition state theory and the concept of a strain-dependent effective activation barrier that is capable of simulating the kinetics of dislocation–defect interactions at virtually any strain rate, exemplified within 10−7 to 107 s−1. 
We apply this approach to the problem of an edge dislocation colliding with a cluster of self-interstitial atoms (SIAs) under shear deformation. Using an activation–relaxation algorithm [Kushima A, et al. (2009) J Chem Phys 130:224504], we uncover a unique strain-rate–dependent trigger mechanism that allows the SIA cluster to be absorbed during the process, leading to dislocation climb. Guided by this finding, we determine the activation barrier of the trigger mechanism as a function of shear strain, and use that in a coarse-graining rate equation formulation for constructing a mechanism map in the phase space of strain rate and temperature. Our predictions of a crossover from a defect recovery at the low strain-rate regime to defect absorption behavior in the high strain-rate regime are validated against our own independent, direct MD simulations at 105 to 107 s−1. Implications of the present approach for probing molecular-level mechanisms in strain-rate regimes previously considered inaccessible to atomistic simulations are discussed. PMID:24114271</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25470323','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25470323"><span>Keeping it in the family: the self-rated health of lone mothers in different European welfare regimes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Van de Velde, Sarah; Bambra, Clare; Van der Bracht, Koen; Eikemo, Terje Andreas; Bracke, Piet</p> <p>2014-11-01</p> <p>This study examines whether health inequalities exist between lone and cohabiting mothers across Europe, and how these may differ by welfare regime. Data from the European Social Survey were used to compare self-rated general health, limiting long-standing illness and depressive feelings by means of a multi-level logistic regression. 
The 27 countries included in the analyses are classified into six welfare regimes (Anglo-Saxon, Bismarckian, Southern, Nordic, Central East Europe (CEE) (new EU) and CEE (non-EU). Lone motherhood is defined as mothers not cohabiting with a partner, regardless of their legal marital status. The results indicate that lone mothers are more at risk of poor health than cohabiting mothers. This is most pronounced in the Anglo-Saxon regime for self-rated general health and limiting long-standing illness, while for depressive feelings it is most pronounced in the Bismarckian welfare regime. While the risk difference is smallest in the CEE regimes, both lone and cohabiting mothers also reported the highest levels of poor health compared with the other regimes. The results also show that a vulnerable socioeconomic position is associated with ill-health in lone mothers and that welfare regimes differ in the degree to which they moderate this association. © 2014 The Authors. Sociology of Health & Illness © 2014 Foundation for the Sociology of Health & Illness/John Wiley & Sons Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ERL....12d5003M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ERL....12d5003M"><span>Differences in production, carbon stocks and biodiversity outcomes of land tenure regimes in the Argentine Dry Chaco</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Marinaro, Sofía; Grau, H. Ricardo; Gasparri, Néstor Ignacio; Kuemmerle, Tobias; Baumann, Matthias</p> <p>2017-04-01</p> <p>Rising global demand for agricultural products results in agricultural expansion and intensification, with substantial environmental trade-offs. 
The South American Dry Chaco contains some of the fastest expanding agricultural frontiers worldwide, and includes diverse forms of land management, mainly associated with different land tenure regimes; which in turn are segregated along environmental gradients (mostly rainfall). Yet, how these regimes impact the environment and how trade-offs between production and environmental outcomes varies remains poorly understood. Here, we assessed how biodiversity, biomass stocks, and agricultural production, measured in meat-equivalents, differ among land tenure regimes in the Dry Chaco. We calculated a land-use outcome index (LUO) that combines indices comparing actual vs. potential values of ‘preservation of biodiversity’ (PI), ‘standing biomass’ (BI) and ‘meat production’ (MI). We found land-use outcomes to vary substantially among land-tenure regimes. Protected areas showed a biodiversity index of 0.75, similar to that of large and medium-sized farms (0.72 in both farming systems), and higher than in the other tenure regimes. Biomass index was similar among land tenure regimes, whereas we found the highest median meat production index on indigenous lands (MI = 0.35). Land-use outcomes, however, varied more across different environmental conditions than across land tenure regimes. Our results suggest that in the Argentine Dry Chaco, there is no single land tenure regime that better minimizes the trade-offs between production and environmental outcomes. 
A useful approach to manage these trade-offs would be to develop geographically explicit guidelines for land-use zoning, identifying the land tenure regimes more appropriate for each zone.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li class="active"><span>21</span></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017NatSR...745382W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017NatSR...745382W"><span>Floquet prethermalization and regimes of heating in a periodically driven, interacting quantum system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Weidinger, Simon A.; Knap, Michael</p> <p>2017-04-01</p> <p>We study the regimes of 
heating in the periodically driven O(N)-model, which is a well-established model for interacting quantum many-body systems. By computing the absorbed energy with a non-equilibrium Keldysh Green’s function approach, we establish three dynamical regimes: at short times a single-particle dominated regime, at intermediate times a stable Floquet prethermal regime in which the system ceases to absorb energy, and at parametrically late times a thermalizing regime. Our simulations suggest that in the thermalizing regime the absorbed energy grows algebraically in time with an exponent that approaches the universal value of 1/2, and is thus significantly slower than linear Joule heating. Our results demonstrate the parametric stability of prethermal states in a many-body system driven at frequencies that are comparable to its microscopic scales. This paves the way for realizing exotic quantum phases, such as time crystals or interacting topological phases, in the prethermal regime of interacting Floquet systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1326059-simulations-stretching-flexible-polyelectrolyte-varying-charge-separation','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1326059-simulations-stretching-flexible-polyelectrolyte-varying-charge-separation"><span>Simulations of stretching a flexible polyelectrolyte with varying charge separation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Stevens, Mark J.; Saleh, Omar A.</p> <p>2016-07-22</p> <p>We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. 
At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the bead diameter in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high force logarithmic regime to occur.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMDI43A0339P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMDI43A0339P"><span>New Numerical Approaches for Modeling Thermochemical Convection in a Compositionally Stratified Fluid</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Puckett, E. G.; Turcotte, D. L.; He, Y.; Lokavarapu, H. V.; Robey, J.; Kellogg, L. 
H.</p> <p>2017-12-01</p> <p>Geochemical observations of mantle-derived rocks favor a nearly homogeneous upper mantle, the source of mid-ocean ridge basalts (MORB), and heterogeneous lower mantle regions. Plumes that generate ocean island basalts are thought to sample the lower mantle regions and exhibit more heterogeneity than MORB. These regions have been associated with lower mantle structures known as large low shear velocity provinces below Africa and the South Pacific. The isolation of these regions is attributed to compositional differences and density stratification that, consequently, have been the subject of computational and laboratory modeling designed to determine the parameter regime in which layering is stable and to understand how layering evolves. Mathematical models of persistent compositional interfaces in the Earth's mantle may be inherently unstable, at least in some regions of the parameter space relevant to the mantle. Computing approximations to solutions of such problems presents severe challenges, even to state-of-the-art numerical methods. Some numerical algorithms for modeling the interface between distinct compositions smear the interface at the boundary between compositions, such as methods that add numerical diffusion or `artificial viscosity' in order to stabilize the algorithm. 
We present two new algorithms for maintaining high-resolution and sharp computational boundaries in computations of these types of problems: a discontinuous Galerkin method with a bound-preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these new methods with two approaches widely used for modeling the advection of two distinct thermally driven compositional fields in mantle convection computations: a high-order accurate finite element advection algorithm with entropy viscosity and a particle method. We compare the performance of these four algorithms on three problems, including computing an approximation to the solution of an initially compositionally stratified fluid at Ra = 10<sup>5</sup> with buoyancy numbers B that vary from no stratification at B = 0 to stratified flow at large B.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMIN53D..08G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMIN53D..08G"><span>Status of the NPP and J1 NOAA Unique Combined Atmospheric Processing System (NUCAPS): recent algorithm enhancements geared toward validation and near real time users applications.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.</p> <p>2017-12-01</p> <p>The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm to retrieve thermodynamic and composition variables from hyper spectral thermal sounders such as CrIS, IASI and AIRS. The combined use of microwave sounders, such as ATMS, AMSU and MHS, enables full atmospheric sounding of the atmospheric column under all-sky conditions. 
NUCAPS retrieval products are accessible in near real time (about 1.5 hour delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedented low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm lies in a number of features that are key in providing highly accurate and stable atmospheric retrievals, suitable for real time weather and air quality applications. Firstly, maximizing the use of the information content present in hyper spectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Secondly, NUCAPS is a modular, name-list driven design. It can process multiple hyper spectral infrared sounders (on Aqua, NPP, MetOp and JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and a synergetic use of microwave radiance measurements enable full vertical sounding of the atmosphere, under all-sky regimes. As we transition toward improved hyper spectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real time user applications. The focus of this presentation is a general introduction to the recent improvements in the delivery of the NUCAPS full spectral resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment. 
Test cases will be shown on the use of NPP and MetOp NUCAPS under pre-convective, capping inversion and dry layer intrusion events.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3740857','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3740857"><span>Recent burning of boreal forests exceeds fire regime limits of the past 10,000 years</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kelly, Ryan; Chipman, Melissa L.; Higuera, Philip E.; Stefanova, Ivanka; Brubaker, Linda B.; Hu, Feng Sheng</p> <p>2013-01-01</p> <p>Wildfire activity in boreal forests is anticipated to increase dramatically, with far-reaching ecological and socioeconomic consequences. Paleorecords are indispensable for elucidating boreal fire regime dynamics under changing climate, because fire return intervals and successional cycles in these ecosystems occur over decadal to centennial timescales. We present charcoal records from 14 lakes in the Yukon Flats of interior Alaska, one of the most flammable ecoregions of the boreal forest biome, to infer causes and consequences of fire regime change over the past 10,000 y. Strong correspondence between charcoal-inferred and observational fire records shows the fidelity of sedimentary charcoal records as archives of past fire regimes. Fire frequency and area burned increased ∼6,000–3,000 y ago, probably as a result of elevated landscape flammability associated with increased Picea mariana in the regional vegetation. 
During the Medieval Climate Anomaly (MCA; ∼1,000–500 cal B.P.), the period most similar to recent decades, warm and dry climatic conditions resulted in peak biomass burning, but severe fires favored less-flammable deciduous vegetation, such that fire frequency remained relatively stationary. These results suggest that boreal forests can sustain high-severity fire regimes for centuries under warm and dry conditions, with vegetation feedbacks modulating climate–fire linkages. The apparent limit to MCA burning has been surpassed by the regional fire regime of recent decades, which is characterized by exceptionally high fire frequency and biomass burning. This extreme combination suggests a transition to a unique regime of unprecedented fire activity. However, vegetation dynamics similar to feedbacks that occurred during the MCA may stabilize the fire regime, despite additional warming. PMID:23878258</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23878258','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23878258"><span>Recent burning of boreal forests exceeds fire regime limits of the past 10,000 years.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kelly, Ryan; Chipman, Melissa L; Higuera, Philip E; Stefanova, Ivanka; Brubaker, Linda B; Hu, Feng Sheng</p> <p>2013-08-06</p> <p>Wildfire activity in boreal forests is anticipated to increase dramatically, with far-reaching ecological and socioeconomic consequences. Paleorecords are indispensable for elucidating boreal fire regime dynamics under changing climate, because fire return intervals and successional cycles in these ecosystems occur over decadal to centennial timescales. 
We present charcoal records from 14 lakes in the Yukon Flats of interior Alaska, one of the most flammable ecoregions of the boreal forest biome, to infer causes and consequences of fire regime change over the past 10,000 y. Strong correspondence between charcoal-inferred and observational fire records shows the fidelity of sedimentary charcoal records as archives of past fire regimes. Fire frequency and area burned increased ∼6,000-3,000 y ago, probably as a result of elevated landscape flammability associated with increased Picea mariana in the regional vegetation. During the Medieval Climate Anomaly (MCA; ∼1,000-500 cal B.P.), the period most similar to recent decades, warm and dry climatic conditions resulted in peak biomass burning, but severe fires favored less-flammable deciduous vegetation, such that fire frequency remained relatively stationary. These results suggest that boreal forests can sustain high-severity fire regimes for centuries under warm and dry conditions, with vegetation feedbacks modulating climate-fire linkages. The apparent limit to MCA burning has been surpassed by the regional fire regime of recent decades, which is characterized by exceptionally high fire frequency and biomass burning. This extreme combination suggests a transition to a unique regime of unprecedented fire activity. However, vegetation dynamics similar to feedbacks that occurred during the MCA may stabilize the fire regime, despite additional warming.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19860056504&hterms=PRIM+algorithm&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DPRIM%2Balgorithm','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19860056504&hterms=PRIM+algorithm&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DPRIM%2Balgorithm"><span>A new approach for solving the three-dimensional steady Euler equations. 
I - General theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chang, S.-C.; Adamczyk, J. J.</p> <p>1986-01-01</p> <p>The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26580029','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26580029"><span>Comparison between Mean Forces and Swarms-of-Trajectories String Methods.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Maragliano, Luca; Roux, Benoît; Vanden-Eijnden, Eric</p> <p>2014-02-11</p> <p>The original formulation of the string method in collective variable space is compared with a recent variant called string method with swarms-of-trajectories. The assumptions made in the original method are revisited and the significance of the minimum free energy path (MFEP) is discussed in the context of reactive events. 
These assumptions are compared to those made in the string method with swarms-of-trajectories, and shown to be equivalent in a certain regime: in particular, an expression for the path identified by the swarms-of-trajectories method is given and shown to be closely related to the MFEP. Finally, the algorithmic aspects of both methods are compared.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21308284-tachyon-quintessence-brane-worlds','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21308284-tachyon-quintessence-brane-worlds"><span>Tachyon and quintessence in brane worlds</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Chimento, Luis P.; Forte, Monica; Richarte, Martin G.</p> <p>2009-04-15</p> <p>Using tachyon or quintessence fields along with a barotropic fluid on the brane we examine the different cosmological stages in a Friedmann-Robertson-Walker universe, from the first radiation scenario to the later era dominated by cosmic string networks. We introduce a new algorithm to generalize previous works on exact solutions and apply it to study tachyon and quintessence fields localized on the brane. We also explore the low and high energy regimes of the solutions. In addition, we show that the tachyon and quintessence fields are driven by an inverse power law potential. Finally, we find several simple exact solutions for tachyon and/or quintessence fields.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1986JCoPh..60...23C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1986JCoPh..60...23C"><span>A new approach for solving the three-dimensional steady Euler equations. 
I - General theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chang, S.-C.; Adamczyk, J. J.</p> <p>1986-08-01</p> <p>The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19860004338','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19860004338"><span>Scattering Models and Basic Experiments in the Microwave Regime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Fung, A. K.; Blanchard, A. J. 
(Principal Investigator)</p> <p>1985-01-01</p> <p>The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which account for near field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) to compare theoretical models with measurements or numerical simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JChPh.143m4106J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JChPh.143m4106J"><span>Surface hopping, transition state theory and decoherence. I. Scattering theory and time-reversibility</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jain, Amber; Herman, Michael F.; Ouyang, Wenjun; Subotnik, Joseph E.</p> <p>2015-10-01</p> <p>We provide an in-depth investigation of transmission coefficients as computed using the augmented-fewest switches surface hopping algorithm in the low energy regime. Empirically, microscopic reversibility is shown to hold approximately. Furthermore, we show that, in some circumstances, including decoherence on top of surface hopping calculations can help recover (as opposed to destroy) oscillations in the transmission coefficient as a function of energy; these oscillations can be studied analytically with semiclassical scattering theory. 
Finally, in the spirit of transition state theory, we also show that transmission coefficients can be calculated rather accurately starting from the curve crossing point and running trajectories forwards and backwards.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvL.120f3607B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvL.120f3607B"><span>Incomplete Detection of Nonclassical Phase-Space Distributions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bohmann, M.; Tiedau, J.; Bartley, T.; Sperling, J.; Silberhorn, C.; Vogel, W.</p> <p>2018-02-01</p> <p>We implement the direct sampling of negative phase-space functions via unbalanced homodyne measurement using click-counting detectors. The negativities significantly certify nonclassical light in the high-loss regime using a small number of detectors which cannot resolve individual photons. We apply our method to heralded single-photon states and experimentally demonstrate the most significant certification of nonclassicality for only two detection bins. By contrast, the frequently applied Wigner function fails to directly indicate such quantum characteristics for the quantum efficiencies present in our setup without applying additional reconstruction algorithms. 
Therefore, we realize a robust and reliable approach to characterize nonclassical light in phase space under realistic conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvL.120l5504B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvL.120l5504B"><span>Abnormal Strain Rate Sensitivity Driven by a Unit Dislocation-Obstacle Interaction in bcc Fe</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bai, Zhitong; Fan, Yue</p> <p>2018-03-01</p> <p>The interaction between an edge dislocation and a sessile vacancy cluster in bcc Fe is investigated over a wide range of strain rates from 10<sup>8</sup> down to 10<sup>3</sup> s<sup>-1</sup>, which is enabled by employing an energy landscape-based atomistic modeling algorithm. It is observed that, in the low strain rate regime below 10<sup>5</sup> s<sup>-1</sup>, such interaction leads to a surprising negative strain rate sensitivity behavior because of the different intermediate microstructures that emerge under the complex interplay between thermal activation and applied strain rate. Implications of our findings regarding the previously established global diffusion model are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000121222','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000121222"><span>An Energy Decaying Scheme for Nonlinear Dynamics of Shells</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)</p> <p>2000-01-01</p> <p>A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. 
The new algorithm is designed so as to imply the strict decay of the system total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high frequency numerical damping and it is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20405047','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20405047"><span>Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part II: proofs of results.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Orellana, Liliana; Rotnitzky, Andrea; Robins, James M</p> <p>2010-03-03</p> <p>In this companion article to "Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content" [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 
7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22660839-assessment-mean-field-mixed-semiclassical-approaches-equilibrium-populations-algorithm-stability','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22660839-assessment-mean-field-mixed-semiclassical-approaches-equilibrium-populations-algorithm-stability"><span>An assessment of mean-field mixed semiclassical approaches: Equilibrium populations and algorithm stability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bellonzi, Nicole; Jain, Amber; Subotnik, Joseph E.</p> <p>2016-04-21</p> <p>We study several recent mean-field semiclassical dynamics methods, focusing on the ability to recover detailed balance for long time (equilibrium) populations. We focus especially on Miller and Cotton’s [J. Phys. Chem. A 117, 7190 (2013)] suggestion to include both zero point electronic energy and windowing on top of Ehrenfest dynamics. We investigate three regimes: harmonic surfaces with weak electronic coupling, harmonic surfaces with strong electronic coupling, and anharmonic surfaces with weak electronic coupling. In most cases, recent additions to Ehrenfest dynamics are a strong improvement upon mean-field theory. However, for methods that include zero point electronic energy, we show that anharmonic potential energy surfaces often lead to numerical instabilities, as caused by negative populations and forces. 
We also show that, though the effect of negative forces can appear hidden in harmonic systems, the resulting equilibrium limits do remain dependent on any windowing and zero point energy parameters.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28375232','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28375232"><span>Wind profiling for a coherent wind Doppler lidar by an auto-adaptive background subtraction approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao</p> <p>2017-04-01</p> <p>Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing of the coherent Doppler lidar (CDL). The method is proposed specifically for the low-signal-to-noise-ratio regime, in which the power spectral density of CDL data drifts. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method presents reliable peaks and is thus advantageous in identifying peak locations. According to the analysis results of simulated and actually measured data, the proposed method outperforms the airPLS method and the PM algorithm in the farthest detectable range. The proposed method improves the detection range by up to approximately 16.7% and 40% compared to the airPLS method and the PM method, respectively. It also has smaller mean wind velocity and standard error values than the airPLS and PM methods. 
The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain full wind profiles with the CDL.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4632188','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4632188"><span>A robust and effective time-independent route to the calculation of Resonance Raman spectra of large molecules in condensed phases with the inclusion of Duschinsky, Herzberg-Teller, anharmonic, and environmental effects</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Egidi, Franco; Bloino, Julien; Cappelli, Chiara; Barone, Vincenzo</p> <p>2015-01-01</p> <p>We present an effective time-independent implementation to model vibrational resonance Raman (RR) spectra of medium-large molecular systems with the inclusion of Franck-Condon (FC) and Herzberg-Teller (HT) effects and a full account of the possible differences between the harmonic potential energy surfaces of the ground and resonant electronic states. Thanks to a number of algorithmic improvements and very effective parallelization, the full computations of fundamentals, overtones, and combination bands can be routinely performed for large systems possibly involving more than two electronic states. In order to improve the accuracy of the results, an effective inclusion of the leading anharmonic effects is also possible, together with environmental contributions under different solvation regimes. Reduced-dimensionality approaches can further enlarge the range of applications of this new tool. Applications to imidazole, pyrene, and chlorophyll a1 in solution are reported, as well as comparisons with available experimental data. 
PMID:26550003</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvA..96d2302T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvA..96d2302T"><span>Error rates and resource overheads of encoded three-qubit gates</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.</p> <p>2017-10-01</p> <p>A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. 
Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27841602','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27841602"><span>Phase transition in the parametric natural visibility graph.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Snarskii, A A; Bezsudnov, I V</p> <p>2016-10-01</p> <p>We investigate time series by mapping them to the complex networks using a parametric natural visibility graph (PNVG) algorithm that generates graphs depending on
an arbitrary continuous parameter, the angle of view. We study the behavior of the relative number of clusters in the PNVG near the critical value of the angle of view. Artificial and experimental time series of different nature are used for numerical PNVG investigations to find the critical exponents above and below the critical point, as well as the exponent in the finite-size scaling regime. Altogether, they allow us to find the critical exponent of the correlation length for the PNVG. The set of calculated critical exponents satisfies the basic Widom relation. The PNVG is found to demonstrate scaling behavior. Our results reveal the similarity between the behavior of the relative number of clusters in the PNVG and the order parameter in second-order phase transition theory. We show that the PNVG is another example of a system (in addition to magnetic, percolation, superconductivity, etc.) with an observed second-order phase transition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22551279-measurement-transient-gas-flow-parameters-diode-laser-absorption-spectroscopy','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22551279-measurement-transient-gas-flow-parameters-diode-laser-absorption-spectroscopy"><span>Measurement of transient gas flow parameters by diode laser absorption spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bolshov, M A; Kuritsyn, Yu A; Liger, V V</p> <p>2015-04-30</p> <p>An absorption spectrometer based on diode lasers is developed for measuring two-dimensional maps of temperature and water vapour concentration distributions in the combustion zones of two mixing supersonic flows of fuel and oxidiser in a single run. 
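For the PNVG record above: the ordinary natural-visibility criterion links two samples of a time series when no intermediate sample blocks the straight line of sight between them, and the parametric version additionally filters links by an angle-of-view parameter. A rough pure-Python sketch (the exact role of the angle parameter here is an illustrative assumption, not the authors' precise definition):

```python
import math


def pnvg_edges(series, alpha):
    """Sketch of a parametric natural visibility graph (PNVG).

    Samples a and b are linked when every intermediate sample lies
    strictly below the straight line joining (a, y[a]) and (b, y[b])
    (the ordinary natural-visibility criterion) and the elevation
    angle of that line is below the angle-of-view parameter alpha.
    """
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[b]
                + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            angle = math.atan2(series[b] - series[a], b - a)
            if visible and angle < alpha:
                edges.add((a, b))
    return edges
```

Sweeping alpha and counting connected clusters of the resulting graph is then the quantity whose behavior the record analyzes near the critical angle of view.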
The method of measuring parameters of hot combustion zones is based on the detection of transient spectra of water vapour absorption. The design of the spectrometer considerably reduces the influence of water vapour absorption along the path of the sensing laser beam outside the combustion chamber. The optical scheme is designed to allow matching of measurement results across different runs of mixture burning. A new algorithm is suggested for obtaining information about the mixture temperature by constructing correlation functions between the experimental spectrum and spectra simulated from databases. A two-dimensional map of temperature distribution in a test chamber is obtained for the first time under the conditions of plasma-induced combustion of the ethylene-air mixture. (laser applications and other topics in quantum electronics)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhRvE..84f6110P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhRvE..84f6110P"><span>Scaling properties and universality of first-passage-time probabilities in financial markets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Perelló, Josep; Gutiérrez-Roig, Mario; Masoliver, Jaume</p> <p>2011-12-01</p> <p>Financial markets provide an ideal frame for the study of crossing or first-passage time events of non-Gaussian correlated dynamics, mainly because large data sets are available. Tick-by-tick data of six futures markets are herein considered, resulting in fat-tailed first-passage time probabilities. The scaling of the return with its standard deviation collapses the probabilities of all markets examined—and also for different time horizons—into single curves, suggesting that first-passage statistics is market independent (at least for high-frequency data). 
On the other hand, a very closely related quantity, the survival probability, shows, away from the center and tails of the distribution, a hyperbolic t^(-1/2) decay typical of Markovian dynamics, despite the existence of memory in markets. Modifications of the Weibull and Student distributions are good candidates for the phenomenological description of first-passage time properties under certain regimes. The scaling strategies shown may be useful for risk control and algorithmic trading.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SpWea..15.1300M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SpWea..15.1300M"><span>Intelligent Sampling of Hazardous Particle Populations in Resource-Constrained Environments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McCollough, J. P.; Quinn, J. M.; Starks, M. J.; Johnston, W. R.</p> <p>2017-10-01</p> <p>Sampling of anomaly-causing space environment drivers is necessary for both real-time operations and satellite design efforts, and optimizing measurement sampling helps minimize resource demands. Relating these measurements to spacecraft anomalies requires the ability to resolve spatial and temporal variability in the energetic charged particle hazard of interest. Here we describe a method for sampling particle fluxes informed by magnetospheric phenomenology so that, along a given trajectory, the variations from both temporal dynamics and spatial structure are adequately captured while minimizing oversampling. We describe the coordinates, sampling method, and specific regions and parameters employed. 
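For the Perelló et al. record above: an empirical first-passage time is simply the number of ticks until the cumulative return first leaves a band whose half-width is the threshold, with the threshold typically scaled by the standard deviation of returns to obtain the collapse described there. A minimal sketch (restarting the walk at zero after each crossing is a simplifying assumption):

```python
def first_passage_times(returns, threshold):
    """First-passage times (in ticks) of the cumulative return.

    The walk starts at zero; whenever the cumulative return first
    leaves the band (-threshold, threshold), the elapsed number of
    ticks is recorded and the walk restarts at zero.
    """
    times, level, t0 = [], 0.0, 0
    for t, r in enumerate(returns, start=1):
        level += r
        if abs(level) >= threshold:
            times.append(t - t0)
            level, t0 = 0.0, t
    return times
```

Histogramming these times for thresholds expressed in units of the return's standard deviation gives the (fat-tailed) first-passage distributions whose scaling the record discusses.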
We compare resulting sampling cadences with data from spacecraft spanning the regions of interest during a geomagnetically active period, showing that the algorithm retains the gross features necessary to characterize environmental impacts on space systems in diverse orbital regimes while greatly reducing the amount of sampling required. This enables sufficient environmental specification within a resource-constrained context, such as limited telemetry bandwidth, processing requirements, and timeliness.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvB..96j4306G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvB..96j4306G"><span>Nonequilibrium ab initio molecular dynamics determination of Ti monovacancy migration rates in B 1 TiN</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gambino, D.; Sangiovanni, D. G.; Alling, B.; Abrikosov, I. A.</p> <p>2017-09-01</p> <p>We use the color diffusion (CD) algorithm in nonequilibrium (accelerated) ab initio molecular dynamics simulations to determine Ti monovacancy jump frequencies in NaCl-structure titanium nitride (TiN), at temperatures ranging from 2200 to 3000 K. Our results show that the CD method extended beyond the linear-fitting rate-versus-force regime [Sangiovanni et al., Phys. Rev. B 93, 094305 (2016), 10.1103/PhysRevB.93.094305] can efficiently determine metal vacancy migration rates in TiN, despite the low mobilities of lattice defects in this type of ceramic compound. We propose a computational method based on gamma-distribution statistics, which provides unambiguous definition of nonequilibrium and equilibrium (extrapolated) vacancy jump rates with corresponding statistical uncertainties. 
The acceleration factor achieved in our implementation of nonequilibrium molecular dynamics increases dramatically with decreasing temperature, from 500 for T close to the melting point T_m up to 33 000 for T ≈ 0.7 T_m.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1407099-measurement-inclusive-jet-dijet-cross-sections-proton-proton-collisions-tev-centre-mass-energy-atlas-detector','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1407099-measurement-inclusive-jet-dijet-cross-sections-proton-proton-collisions-tev-centre-mass-energy-atlas-detector"><span>Measurement of inclusive jet and dijet cross sections in proton-proton collisions at 7 TeV centre-of-mass energy with the ATLAS detector</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Aad, G.; Abbott, B.; Abdallah, J.; ...</p> <p>2011-02-03</p> <p>Jet cross sections have been measured for the first time in proton-proton collisions at a centre-of-mass energy of 7 TeV using the ATLAS detector. The measurement uses an integrated luminosity of 17 nb^(-1) recorded at the Large Hadron Collider. The anti-k_t algorithm is used to identify jets, with two jet resolution parameters, R = 0.4 and 0.6. The dominant uncertainty comes from the jet energy scale, which is determined to within 7% for central jets above 60 GeV transverse momentum. Inclusive single-jet differential cross sections are presented as functions of jet transverse momentum and rapidity. Dijet cross sections are presented as functions of dijet mass and the angular variable χ. 
The results are compared to expectations based on next-to-leading-order QCD, which agree with the data, providing a validation of the theory in a new kinematic regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148x1704S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148x1704S"><span>Gaussian process regression to accelerate geometry optimizations relying on numerical differentiation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schmitz, Gunnar; Christiansen, Ove</p> <p>2018-06-01</p> <p>We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results at a low computational level (HF or MP2) with a GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated Coupled Cluster Singles and Doubles with perturbative Triples correction, CCSD(F12*)(T). Overall convergence is achieved if both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single-point calculations is reduced. 
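The delta-learning idea in the Schmitz and Christiansen record above, stripped to its core: train a Gaussian process on the difference between target-level and cheap-level energies at a few geometries, then correct the cheap surface everywhere else. A self-contained one-dimensional sketch with an RBF kernel (the kernel, length scale, and toy energy functions are illustrative assumptions, not the paper's setup):

```python
import math


def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)


def solve(A, y):
    """Solve A x = y by Gaussian elimination (small dense systems only)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x


def gpr_delta(x_train, delta_train, x_query, ell=1.0, noise=1e-10):
    """GP posterior mean of the (target - cheap) correction at x_query."""
    K = [[rbf(a, b, ell) + (noise if i == j else 0.0)
          for j, b in enumerate(x_train)]
         for i, a in enumerate(x_train)]
    alpha = solve(K, delta_train)
    return sum(rbf(x_query, a, ell) * w for a, w in zip(x_train, alpha))
```

For instance, with a cheap surface x² and a target surface x² + 0.1x, training the correction at x = -1, 0, 1 reproduces the 0.1x difference at those geometries, so cheap energy plus predicted delta approximates the target surface there.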
Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1368066','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1368066"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.</p> <p></p> <p>To predict the jet mass spectrum at a hadron collider, it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail regions of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. 
We also present universal results for nonperturbative effects and discuss various jet vetoes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24730793','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24730793"><span>Critical behavior of a relativistic Bose gas.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pandita, P N</p> <p>2014-03-01</p> <p>We show that the thermodynamic behavior of a relativistic ideal Bose gas, recently studied numerically by Grether et al., can be obtained analytically. Using the analytical results, we obtain the critical behavior of the relativistic Bose gas exactly in all regimes. We show that these analytical results reduce to those of Grether et al. in the different regimes of the Bose gas. Furthermore, we also obtain a closed-form analytical expression for the energy density of the Bose gas that is valid in all regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28295561','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28295561"><span>Effects of national forest-management regimes on unprotected forests of the Himalaya.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brandt, Jodi S; Allendorf, Teri; Radeloff, Volker; Brooks, Jeremy</p> <p>2017-12-01</p> <p>Globally, deforestation continues, and although protected areas effectively protect forests, the majority of forests are not in protected areas. How effective, then, are different management regimes at avoiding deforestation in non-protected forests? We sought to assess the effectiveness of different national forest-management regimes at safeguarding forests outside protected areas. 
We compared 2000-2014 deforestation rates across the temperate forests of 5 countries in the Himalaya (Bhutan, Nepal, China, India, and Myanmar), of which 13% are protected. We reviewed the literature to characterize forest-management regimes in each country and conducted a quasi-experimental analysis to measure differences in deforestation of unprotected forests among countries and among states in India. Countries varied in both overarching forest-management goals and specific tenure arrangements and policies for unprotected forests, from policies emphasizing economic development to those focused on forest conservation. Deforestation rates differed by up to 1.4% between countries, even after accounting for local determinants of deforestation, such as human population density, market access, and topography. The highest deforestation rates were associated with forest policies aimed at maximizing profits and unstable tenure regimes. Deforestation under national forest-management regimes that emphasized conservation and community management was relatively low. Within India, results were consistent with the national-level results. We interpreted our results in the context of the broader literature on decentralized, community-based natural resource management, and our findings emphasize that the type and quality of community-based forestry programs and the degree to which they are oriented toward sustainable use rather than economic development are important for forest protection. Our cross-national results are consistent with results from site- and regional-scale studies showing that forest-management regimes that ensure stable land tenure and integrate local-livelihood benefits with forest conservation produce the best forest outcomes. 
© 2017 Society for Conservation Biology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29456256','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29456256"><span>Clustering the Orion B giant molecular cloud based on its molecular emission.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal</p> <p>2018-02-01</p> <p>Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional Probability Density Function (PDF) of the line integrated intensities. Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. 
A clustering analysis based only on the J = 1 - 0 lines of three isotopologues of CO proves sufficient to reveal distinct density/column density regimes (n_H ~ 100 cm^(-3), ~ 500 cm^(-3), and > 1000 cm^(-3)), closely related to the usual definitions of diffuse, translucent, and high-column-density regions. Adding two UV-sensitive tracers, the J = 1 - 0 line of HCO+ and the N = 1 - 0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO+ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO+ intensity ratios in UV-illuminated regions. Finer distinctions in density classes (n_H ~ 7 × 10^3 cm^(-3) and ~ 4 × 10^4 cm^(-3)) for the densest regions are also identified, likely related to the higher critical densities of the CN and HCO+ (1 - 0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. 
The association of simultaneous multi-line, wide-field mapping with powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26582510','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26582510"><span>Genotype by watering regime interaction in cultivated tomato: lessons from linkage mapping and gene expression.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Albert, Elise; Gricourt, Justine; Bertin, Nadia; Bonnefoi, Julien; Pateyron, Stéphanie; Tamby, Jean-Philippe; Bitton, Frédérique; Causse, Mathilde</p> <p>2016-02-01</p> <p>In tomato, genotype by watering regime interaction resulted from genotype re-ranking more than from scale changes. Interactive QTLs according to watering regime were detected. Differentially expressed genes were identified in some intervals. As a result of climate change, drought will increasingly limit crop production in the future. Studying genotype by watering regime interactions is necessary to improve plant adaptation to low water availability. In cultivated tomato (Solanum lycopersicum L.), extensively grown in dry areas, well-mastered water deficits can stimulate metabolite production, increasing plant defenses and the concentration of compounds involved in fruit quality at the same time. However, few tomato Quantitative Trait Loci (QTLs) and genes involved in the response to drought have been identified, and mostly only in wild species. In this study, we phenotyped a population of 119 recombinant inbred lines derived from a cross between a cherry tomato and a large-fruit tomato, grown in a greenhouse under two watering regimes, in two locations. 
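The Meanshift clustering used in the Orion B record above finds modes of the intensity probability density by repeatedly shifting each sample to the mean of its neighbors; pixels whose shifts converge to the same mode share a cluster, with no spatial information used. A minimal flat-kernel version in one dimension (bandwidth, grouping tolerance, and iteration count are illustrative choices; the actual analysis is multidimensional over several line intensities):

```python
def mean_shift(points, bandwidth, n_iter=50):
    """Minimal 1-D flat-kernel mean-shift.

    Each mode starts at a sample and is repeatedly moved to the mean of
    all samples within `bandwidth`; samples whose modes converge to the
    same place receive the same cluster label.
    """
    modes = list(points)
    for _ in range(n_iter):
        new_modes = []
        for m in modes:
            neighbours = [p for p in points if abs(p - m) <= bandwidth]
            new_modes.append(sum(neighbours) / len(neighbours))
        modes = new_modes
    # Group converged modes that agree to within a tolerance.
    labels, centers = [], []
    for m in modes:
        for i, c in enumerate(centers):
            if abs(m - c) <= bandwidth / 2:
                labels.append(i)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers
```

Each recovered center plays the role of a maximum of the intensity PDF around which the record defines its clusters.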
Large genetic variability was measured for 19 plant and fruit traits under the two watering treatments. Highly significant genotype by watering regime interactions were detected and resulted from re-ranking more than from scale changes. The population was genotyped for 679 SNP markers to develop a genetic map. In total, 56 QTLs were identified, among which 11 were interactive between watering regimes. The latter mainly exhibited antagonistic effects according to watering treatment. Variation in gene expression in leaves of the parental accessions revealed 2259 differentially expressed genes, among which candidate genes presenting sequence polymorphisms were identified under two main interactive QTLs. Our results provide knowledge about the genetic control of genotype by watering regime interactions in cultivated tomato and the possible use of deficit irrigation to improve tomato quality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhFl...30d0901Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhFl...30d0901Y"><span>An implicit scheme with memory reduction technique for steady state solutions of DVBE in all flow regimes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, L. M.; Shu, C.; Yang, W. M.; Wu, J.</p> <p>2018-04-01</p> <p>High memory consumption and computational cost are the major barriers preventing the widespread use of the discrete velocity method (DVM) in the simulation of flows in all flow regimes. To overcome this drawback, an implicit DVM with a memory reduction technique for solving a steady discrete velocity Boltzmann equation (DVBE) is presented in this work. In the method, the distribution functions in the whole discrete velocity space do not need to be stored, and they are calculated from the macroscopic flow variables. 
As a result, its memory requirement is of the same order as that of a conventional Euler/Navier-Stokes solver. At the same time, it is more efficient than the explicit DVM for the simulation of various flows. To make the method efficient for solving flow problems in all flow regimes, a prediction step is introduced to estimate the local equilibrium state of the DVBE. In the prediction step, the distribution function at the cell interface is calculated by the local solution of the DVBE. When the cell size is less than the mean free path, the prediction step has almost no effect on the solution. However, when the cell size is much larger than the mean free path, the prediction step dominates the solution so as to provide reasonable results in such a flow regime. In addition, to further improve the computational efficiency of the developed scheme in the continuum flow regime, the implicit technique is also introduced into the prediction step. Numerical results show that the proposed implicit scheme provides reasonable results in all flow regimes and significantly increases the computational efficiency in the continuum flow regime compared with existing DVM solvers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28348238','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28348238"><span>Evolving polycentric governance of the Great Barrier Reef.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Morrison, Tiffany H</p> <p>2017-04-11</p> <p>A growing field of sustainability science examines how environments are transformed through polycentric governance. However, many studies are only snapshot analyses of the initial design or the emergent structure of polycentric regimes. 
There is less systematic analysis of the longitudinal robustness of polycentric regimes. The problem of robustness is approached by focusing not only on the structure of a regime but also on its context and effectiveness. These dimensions are examined through a longitudinal analysis of the Great Barrier Reef (GBR) governance regime, drawing on in-depth interviews and demographic, economic, and employment data, as well as organizational records and participant observation. Between 1975 and 2011, the GBR regime evolved into a robust polycentric structure as evident in an established set of multiactor, multilevel arrangements addressing marine, terrestrial, and global threats. However, from 2005 onward, multiscale drivers precipitated at least 10 types of regime change, ranging from contextual change that encouraged regime drift to deliberate changes that threatened regime conversion. More recently, regime realignment also has occurred in response to steering by international organizations and shocks such as the 2016 mass coral-bleaching event. The results show that structural density and stability in a governance regime can coexist with major changes in that regime's context and effectiveness. 
Clear analysis of the vulnerability of polycentric governance to both diminishing effectiveness and the masking effects of increasing complexity provides sustainability science and governance actors with a stronger basis to understand and respond to regime change.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70014597','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70014597"><span>Solute transport with equilibrium aqueous complexation and either sorption or ion exchange: Simulation methodology and applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Lewis, F.M.; Voss, C.I.; Rubin, J.</p> <p>1987-01-01</p> <p>Methodologies that account for specific types of chemical reactions in the simulation of solute transport can be developed so they are compatible with solution algorithms employed in existing transport codes. This enables the simulation of reactive transport in complex multidimensional flow regimes, and provides a means for existing codes to account for some of the fundamental chemical processes that occur among transported solutes. Two equilibrium-controlled reaction systems demonstrate a methodology for accommodating chemical interaction into models of solute transport. One system involves the sorption of a given chemical species, as well as two aqueous complexations in which the sorbing species is a participant. The other reaction set involves binary ion exchange coupled with aqueous complexation involving one of the exchanging species. The methodology accommodates these reaction systems through the addition of nonlinear terms to the transport equations for the sorbing species. 
Example simulation results show (1) the effect equilibrium chemical parameters have on the spatial distributions of concentration for complexing solutes; (2) that an interrelationship exists between mechanical dispersion and the various reaction processes; (3) that dispersive parameters of the porous media cannot be determined from reactive concentration distributions unless the reaction is accounted for or the influence of the reaction is negligible; (4) how the concentration of a chemical species may be significantly affected by its participation in an aqueous complex with a second species which also sorbs; and (5) that these coupled chemical processes influencing reactive transport can be demonstrated in two-dimensional flow regimes. © 1987.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PEPI..276...10P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PEPI..276...10P"><span>New numerical approaches for modeling thermochemical convection in a compositionally stratified fluid</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Puckett, Elbridge Gerry; Turcotte, Donald L.; He, Ying; Lokavarapu, Harsha; Robey, Jonathan M.; Kellogg, Louise H.</p> <p>2018-03-01</p> <p>Geochemical observations of mantle-derived rocks favor a nearly homogeneous upper mantle, the source of mid-ocean ridge basalts (MORB), and heterogeneous lower mantle regions. Plumes that generate ocean island basalts are thought to sample the lower mantle regions and exhibit more heterogeneity than MORB. These regions have been associated with lower mantle structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. 
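The methodology in the Lewis, Voss, and Rubin record above augments the advective-dispersive transport equation with reaction terms. In the simplest case, linear equilibrium sorption reduces to a retardation factor R dividing the transport operator, R ∂c/∂t = D ∂²c/∂x² − v ∂c/∂x. A schematic explicit finite-difference step for 1-D transport (upwind advection; the linear retardation here is a deliberate simplification of the nonlinear terms the record actually discusses):

```python
def transport_step(c, v, D, R, dx, dt):
    """One explicit step of 1-D advection-dispersion with retardation.

    Discretizes R dc/dt = D d2c/dx2 - v dc/dx with first-order upwind
    advection and central dispersion; boundary cells are held fixed.
    Stability requires a sufficiently small dt (CFL-type condition).
    """
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        adv = -v * (c[i] - c[i - 1]) / dx
        disp = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        new[i] = c[i] + dt * (adv + disp) / R
    return new
```

The sorbing species simply advances 1/R times as fast as a conservative tracer, which is the qualitative signature of points (1) and (3) in the record: ignoring the reaction when fitting dispersive parameters misattributes this slowing to the porous medium.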
The isolation of these regions is attributed to compositional differences and density stratification that, consequently, have been the subject of computational and laboratory modeling designed to determine the parameter regime in which layering is stable and to understand how layering evolves. Mathematical models of persistent compositional interfaces in the Earth's mantle may be inherently unstable, at least in some regions of the parameter space relevant to the mantle. Computing approximations to solutions of such problems presents severe challenges, even to state-of-the-art numerical methods. Some numerical algorithms for modeling the interface between distinct compositions smear the interface at the boundary between compositions, such as methods that add numerical diffusion or 'artificial viscosity' in order to stabilize the algorithm. We present two new algorithms for maintaining high-resolution, sharp computational boundaries in computations of these types of problems: a discontinuous Galerkin method with a bound-preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these new methods with two approaches widely used for modeling the advection of two distinct thermally driven compositional fields in mantle convection computations: a high-order accurate finite element advection algorithm with entropy viscosity and a particle method that carries a scalar quantity representing the location of each compositional field. All four algorithms are implemented in the open source finite element code ASPECT, which we use to compute the velocity, pressure, and temperature associated with the underlying flow field.
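The interface smearing attributed above to schemes with numerical diffusion can be seen in a minimal one-dimensional sketch. This is not ASPECT code; it is a first-order upwind scheme on a periodic domain with illustrative parameters, chosen only to show how an initially sharp compositional boundary spreads over many cells:

```python
import numpy as np

def upwind_advect(c, nu, steps):
    """Advect field c with a first-order upwind scheme at CFL number nu
    (0 < nu <= 1). The scheme is stable but numerically diffusive, so a
    sharp interface is smeared over more and more cells as it advects."""
    for _ in range(steps):
        c = c - nu * (c - np.roll(c, 1))  # periodic domain, velocity > 0
    return c

n = 200
c0 = np.where(np.arange(n) < n // 2, 1.0, 0.0)  # sharp compositional interface
c1 = upwind_advect(c0.copy(), nu=0.5, steps=100)

# Width of the transition zones: cells strictly between the two compositions.
width = int(np.sum((c1 > 0.05) & (c1 < 0.95)))
print(width)
```

A Volume-of-Fluid or bound-preserving discretization is designed precisely to keep `width` near zero over the same number of steps, which is the motivation for the two new algorithms described in the abstract.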
We compare the performance of these four algorithms on three problems, including computing an approximation to the solution of an initially compositionally stratified fluid at Ra = 10<sup>5</sup> with buoyancy numbers B that vary from no stratification at B = 0 to stratified flow at large B.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20130012859','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20130012859"><span>Toward a Physical Characterization of Raindrop Collision Outcome Regimes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Testik, F. Y.; Barros, Ana P.; Bilven, Francis L.</p> <p>2011-01-01</p> <p>A comprehensive raindrop collision outcome regime diagram that delineates the physical conditions associated with the outcome regimes (i.e., bounce, coalescence, and different breakup types) of binary raindrop collisions is proposed. The proposed diagram builds on a theoretical regime diagram defined in the phase space of collision Weber numbers We and the drop diameter ratio p by including critical angle of impact considerations. In this study, the theoretical regime diagram is first evaluated against a comprehensive dataset for drop collision experiments representative of raindrop collisions in nature. Subsequently, the theoretical regime diagram is modified to explicitly describe the dominant regimes of raindrop interactions in (We, p) space by delineating the physical conditions necessary for the occurrence of distinct types of collision-induced breakup (neck/filament, sheet, disk, and crown breakups) based on critical angle of impact considerations. Crown breakup is a subtype of disk breakup at lower collision kinetic energy that presents a distinctive morphology.
Finally, the experimental results are analyzed in the context of the comprehensive collision regime diagram, and conditional probabilities that can be used in the parameterization of breakup kernels in stochastic models of raindrop dynamics are provided.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29438794','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29438794"><span>Discrete multi-physics simulations of diffusive and convective mass transfer in boundary layers containing motile cilia in lungs.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ariane, Mostapha; Kassinos, Stavros; Velaga, Sitaram; Alexiadis, Alessio</p> <p>2018-04-01</p> <p>In this paper, the mass transfer coefficient (permeability) of boundary layers containing motile cilia is investigated by means of discrete multi-physics. The idea is to understand the main mechanisms of mass transport occurring in a ciliated layer; one specific application being inhaled drugs in the respiratory epithelium. The effects of drug diffusivity, cilia beat frequency, and cilia flexibility are studied. Our results show the existence of three mass transfer regimes: a low-frequency regime, which we call the shielding regime, where the presence of the cilia hinders mass transport; an intermediate-frequency regime, which we call the diffusive regime, where diffusion is the controlling mechanism; and a high-frequency regime, which we call the convective regime, where the degree of bending of the cilia seems to be the most important factor controlling mass transfer in the ciliated layer. Since the flexibility of the cilia and the frequency of the beat change with age and health conditions, knowledge of these three regimes allows prediction of how mass transfer varies with these factors.
</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140013289','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140013289"><span>An Examination of the Nature of Global MODIS Cloud Regimes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Oreopoulos, Lazaros; Cho, Nayeong; Lee, Dongmin; Kato, Seiji; Huffman, George J.</p> <p>2014-01-01</p> <p>We introduce global cloud regimes (previously also referred to as "weather states") derived from cloud retrievals that use measurements by the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Aqua and Terra satellites. The regimes are obtained by applying clustering analysis on joint histograms of retrieved cloud top pressure and cloud optical thickness. By employing a compositing approach on data sets from satellites and other sources, we examine regime structural and thermodynamical characteristics. We establish that the MODIS cloud regimes tend to form in distinct dynamical and thermodynamical environments and have diverse profiles of cloud fraction and water content. When compositing radiative fluxes from the Clouds and the Earth's Radiant Energy System instrument and surface precipitation from the Global Precipitation Climatology Project, we find that regimes with a radiative warming effect on the atmosphere also produce the largest implied latent heat.
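The regime-extraction step described above, clustering analysis on joint cloud-top-pressure/optical-thickness histograms, can be sketched with a plain k-means loop. The data below are synthetic stand-ins, and the histogram grid size and number of clusters are illustrative choices, not the actual MODIS processing:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain k-means: each row of X is one flattened joint histogram, and the
    final centroids play the role of the cloud regimes."""
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each histogram to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # empty clusters keep their old centroid
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic stand-in: 300 scenes, each a flattened 7x6 (cloud-top pressure x
# optical thickness) histogram normalized to unit total fraction.
base = rng.dirichlet(np.ones(42), size=3)        # three synthetic "regimes"
member = rng.integers(0, 3, size=300)
X = np.array([rng.dirichlet(base[m] * 50 + 0.1) for m in member])

labels, regimes = kmeans(X, k=3)
print(labels.shape, regimes.shape)
```

Each centroid is itself a joint histogram, which is what makes the regimes directly interpretable as characteristic cloud-type mixtures.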
Taken as a whole, the results of the study corroborate the usefulness of the cloud regime concept, reaffirm the fundamental nature of the regimes as appropriate building blocks for cloud system classification, clarify their association with standard cloud types, and underscore their distinct radiative and hydrological signatures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110015768','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110015768"><span>The NASA CloudSat/GPM Light Precipitation Validation Experiment (LPVEx)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Petersen, Walter A.; L'Ecuyer, Tristan; Moisseev, Dmitri</p> <p>2011-01-01</p> <p>Ground-based measurements of cool-season precipitation at mid and high latitudes (e.g., above 45 deg N/S) suggest that a significant fraction of the total precipitation volume falls in the form of light rain, i.e., at rates less than or equal to a few mm/h. These cool-season light rainfall events often originate in situations of a low-altitude (e.g., lower than 2 km) melting level and pose a significant challenge to the fidelity of all satellite-based precipitation measurements, especially those relying on the use of multifrequency passive microwave (PMW) radiometers. As a result, significant disagreements exist between satellite estimates of rainfall accumulation poleward of 45 deg. Ongoing efforts to develop, improve, and ultimately evaluate physically-based algorithms designed to detect and accurately quantify high latitude rainfall, however, suffer from a general lack of detailed, observationally-based ground validation datasets. 
These datasets serve as a physically consistent framework from which to test and refine algorithm assumptions, and as a means to build the library of algorithm retrieval databases in higher latitude cold-season light precipitation regimes. These databases are especially relevant to NASA's CloudSat and Global Precipitation Measurement (GPM) ground validation programs that are collecting high-latitude precipitation measurements in meteorological systems associated with frequent cool-season light precipitation events. In an effort to improve the inventory of cool-season high-latitude light precipitation databases and advance the physical process assumptions made in satellite-based precipitation retrieval algorithm development, the CloudSat and GPM mission ground validation programs collaborated with the Finnish Meteorological Institute (FMI), the University of Helsinki (UH), and Environment Canada (EC) to conduct the Light Precipitation Validation Experiment (LPVEx). The LPVEx field campaign was designed to make detailed measurements of cool-season light precipitation by leveraging existing infrastructure in the Helsinki Precipitation Testbed.
LPVEx was conducted during September–October 2010 and featured coordinated ground and airborne remote sensing components designed to observe and quantify the precipitation physics associated with light rain in low-altitude melting-layer environments over the Gulf of Finland and the neighboring land mass surrounding Helsinki, Finland.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JHyd..325..241S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JHyd..325..241S"><span>Predicting streamflow regime metrics for ungauged streams in Colorado, Washington, and Oregon</span></a></p> <p><a target="_blank" rel="noopener noreferrer"
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sanborn, Stephen C.; Bledsoe, Brian P.</p> <p>2006-06-01</p> <p>Streamflow prediction in ungauged basins provides essential information for water resources planning and management and for ecohydrological studies, yet it remains a fundamental challenge to the hydrological sciences. A methodology is presented for stratifying streamflow regimes of gauged locations, classifying the regimes of ungauged streams, and developing models for predicting a suite of ecologically pertinent streamflow metrics for these streams. Eighty-four streamflow metrics characterizing various flow regime attributes were computed along with physical and climatic drainage basin characteristics for 150 streams with little or no streamflow modification in Colorado, Washington, and Oregon. The diverse hydroclimatology of the study area necessitates flow regime stratification, and geographically independent clusters were identified and used to develop separate predictive models for each flow regime type. Multiple regression models for flow magnitude, timing, and rate of change metrics were quite accurate, with many adjusted R<sup>2</sup> values exceeding 0.80, while models describing streamflow variability did not perform as well. Separate stratification schemes for high, low, and average flows did not considerably improve models for metrics describing those particular aspects of the regime over a scheme based on the entire flow regime. Models for streams identified as 'snowmelt' type were improved if sites in Colorado and the Pacific Northwest were separated to better stratify the processes driving streamflow in these regions, thus revealing limitations of geographically independent streamflow clusters. This study demonstrates that a broad suite of ecologically relevant streamflow characteristics can be accurately modeled across large heterogeneous regions using this framework.
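The per-cluster regression step in the framework above can be sketched as an ordinary least-squares fit scored by adjusted R². The basin data below are synthetic, and the 150-basin, 4-characteristic setup is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_flow_metric(basin_chars, metric):
    """Ordinary least squares fit of one streamflow metric on basin
    characteristics, returning coefficients and adjusted R^2."""
    n, p = basin_chars.shape
    X = np.column_stack([np.ones(n), basin_chars])   # add intercept column
    coef, *_ = np.linalg.lstsq(X, metric, rcond=None)
    resid = metric - X @ coef
    r2 = 1.0 - np.sum(resid**2) / np.sum((metric - metric.mean())**2)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)  # penalize extra predictors
    return coef, adj_r2

# Synthetic stand-in: 150 basins, 4 characteristics (e.g. area, elevation,
# mean precipitation, slope) and a magnitude-type metric driven by them.
chars = rng.normal(size=(150, 4))
metric = 2.0 + chars @ np.array([1.5, -0.8, 0.6, 0.3]) \
    + rng.normal(scale=0.3, size=150)

coef, adj_r2 = fit_flow_metric(chars, metric)
print(round(adj_r2, 2))
```

In the study's scheme, one such model would be fitted per flow-regime cluster and per metric, which is why stratification quality directly affects the reported adjusted R² values.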
Applications of the resulting models include stratifying biomonitoring sites and quantifying linkages between specific aspects of flow regimes and aquatic community structure. In particular, the results bode well for modeling ecological processes related to high-flow magnitude, timing, and rate of change such as the recruitment of fish and riparian vegetation across large regions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2854089','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2854089"><span>Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part II: Proofs of Results*</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Orellana, Liliana; Rotnitzky, Andrea; Robins, James M.</p> <p>2010-01-01</p> <p>In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption. 
PMID:20405047</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21546695-weak-measurements-beyond-aharonov-albert-vaidman-formalism','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21546695-weak-measurements-beyond-aharonov-albert-vaidman-formalism"><span>Weak measurements beyond the Aharonov-Albert-Vaidman formalism</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wu Shengjun; Li Yang</p> <p>2011-05-15</p> <p>We extend the idea of weak measurements to the general case, provide a complete treatment, and obtain results for both the regime when the preselected and postselected states (PPS) are almost orthogonal and the regime when they are exactly orthogonal. We surprisingly find that for a fixed interaction strength, there may exist a maximum signal amplification and a corresponding optimum overlap of PPS to achieve it. 
For weak measurements in the orthogonal regime, we find interesting quantities that play the same role that weak values play in the nonorthogonal regime.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26066028','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26066028"><span>Microbial ecology in a future climate: effects of temperature and moisture on microbial communities of two boreal fens.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Peltoniemi, Krista; Laiho, Raija; Juottonen, Heli; Kiikkilä, Oili; Mäkiranta, Päivi; Minkkinen, Kari; Pennanen, Taina; Penttilä, Timo; Sarjala, Tytti; Tuittila, Eeva-Stiina; Tuomivirta, Tero; Fritze, Hannu</p> <p>2015-07-01</p> <p>Impacts of warming with open-top chambers on microbial communities in wet conditions and in conditions resulting from moderate water-level drawdown (WLD) were studied across 0-50 cm depth in northern and southern boreal sedge fens. Warming alone decreased microbial biomass especially in the northern fen. Impact of warming on microbial PLFA and fungal ITS composition was more obvious in the northern fen and linked to moisture regime and sample depth. Fungal-specific PLFA increased in the surface peat in the drier regime and decreased in layers below 10 cm in the wet regime after warming. OTUs representing Tomentella and Lactarius were observed in drier regime and Mortierella in wet regime after warming in the northern fen. The ectomycorrhizal fungi responded only to WLD. Interestingly, warming together with WLD decreased archaeal 16S rRNA copy numbers in general, and fungal ITS copy numbers in the northern fen. Expectedly, many results indicated that microbial response on warming may be linked to the moisture regime. 
Results indicated that the microbial community in the northern fen, representative of Arctic soils, would be more sensitive to environmental changes. The response to future climate change may clearly vary even within a habitat type, exemplified here by boreal sedge fen.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1915323C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1915323C"><span>A field evaluation of a satellite microwave rainfall sensor network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Caridi, Andrea; Caviglia, Daniele D.; Colli, Matteo; Delucchi, Alessandro; Federici, Bianca; Lanza, Luca G.; Pastorino, Matteo; Randazzo, Andrea; Sguerso, Domenico</p> <p>2017-04-01</p> <p>An innovative environmental monitoring system, the Smart Rainfall System (SRS), that estimates rainfall in real time through analysis of the attenuation of satellite signals (DVB-S in the microwave Ku band) is presented. The system consists of a set of peripheral microwave sensors placed in the field of interest and connected to a central processing and analysis node. It has been developed jointly by the DITEN and DICCA departments of the University of Genoa and the Genoese SME "Darts Engineering Srl". This work discusses the rainfall intensity measurement accuracy and sensitivity of SRS, based on preliminary results from a field comparison experiment at the urban scale. The test-bed comprises a set of preliminary measurement sites established from autumn 2016 in the Genoa (Italy) municipality, and data collected by the sensors during a selection of rainfall events are studied.
The availability of point-scale rainfall intensity measurements made by traditional tipping-bucket rain gauges and radar areal observations allows a comparative analysis of the SRS performance. The calibration of the reference rain gauges has been carried out at the laboratories of DICCA using a rainfall simulator, and the measurements have been processed with advanced algorithms to reduce counting errors. The experimental set-up allows a fine tuning of the retrieval algorithm and a full characterization of the accuracy of the rainfall intensity estimates from the microwave signal attenuation as a function of different precipitation regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29718177','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29718177"><span>JRmGRN: Joint reconstruction of multiple gene regulatory networks with common hub genes using data from multiple tissues or conditions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong</p> <p>2018-04-30</p> <p>Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models and methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we propose a new Gaussian graphical model for joint reconstruction of multiple gene regulatory networks (JRmGRN) that highlights hub genes, using gene expression data from several tissues/conditions.
Within the Gaussian graphical model framework, the JRmGRN method constructs the GRNs by maximizing a penalized log-likelihood function. We formulated this as a convex optimization problem and solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data, and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods, and both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from: https://github.com/wenpingd. hairong@mtu.edu. The proof of the theorem, derivation of the algorithm, and supplementary data are available at Bioinformatics online.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/21867','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/21867"><span>Mixed-severity fire regimes in the northern Rocky Mountains: consequences of fire exclusion and options for the future</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Stephen F. Arno; David J. Parsons; Robert E. Keane</p> <p>2000-01-01</p> <p>Findings from fire history studies have increasingly indicated that many forest ecosystems in the northern Rocky Mountains were shaped by mixed-severity fire regimes, characterized by fires of variable severities at intervals averaging between about 30 and 100 years.
Perhaps because mixed-severity fire regimes and their resulting vegetational patterns are difficult to...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DPPN11005V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DPPN11005V"><span>Catastrophic global-avalanche of a hollow pressure filament</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>van Compernolle, B.; Poulos, M. J.; Morales, G. J.</p> <p>2017-10-01</p> <p>New results are presented of a basic heat transport experiment performed in the Large Plasma Device at UCLA. A ring-shaped electron beam source injects low energy electrons along a strong magnetic field into a preexisting, large and cold plasma. The injected electrons are thermalized by Coulomb collisions within a short distance and provide an off-axis heat source that results in a long, hollow, cylindrical region of elevated plasma pressure. The off-axis source is active for a period long compared to the density decay time, i.e., as time progresses the power per particle increases. Two distinct regimes are observed to take place, an early regime dominated by multiple avalanches, identified as a sudden intermittent rearrangement of the pressure profile that repeats under sustained heating, and a second regime dominated by broadband drift-Alfvén fluctuations. The transition between the two regimes is sudden and global, both radially and axially. The initial regime is characterized by peaked density and temperature profiles, while only the peaked temperature profile survives in the second regime. Recent measurements at multiple axial locations provide new insight into the axial dynamics of the global avalanche. 
Sponsored by NSF Grant 1619505 and by DOE/NSF at BaPSF.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EGUGA..1411969E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EGUGA..1411969E"><span>Reorientation Timescales and Pattern Dynamics for Titan's Dunes: Does the Tail Wag the Dog or the Dragon?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ewing, R. C.; Hayes, A. G.; McCormick, C.; Ballard, C.; Troy, S. A.</p> <p>2012-04-01</p> <p>Fields of bedform patterns persist across many orders of magnitude, from cm-scale sub-aqueous current ripples to km-scale aeolian dunes, and form with surprisingly little difference in expression despite a range of formative environments. Because of the remarkable similarity among bedform patterns, extracting information about climate and environment from these patterns is a challenge. For example, crestline orientation is not diagnostic of a particular flow regime; similar patterns form under many different flow configurations. On Titan, these challenges have played out with many attempts to reconcile dune crestline orientation with modeled and expected wind regimes. We propose that thinking about the time-scale of the change in dune orientation, rather than the orientation itself, can provide new insights on the long-term stability of the dune-field patterns and the formative wind regime. In this work, we apply the crestline re-orientation model developed by Werner and Kocurek [Geology, 1997] to the equatorial dune fields of Titan. We use Cassini Synthetic Aperture Radar images processed through a de-noising algorithm recently developed by Lucas et al. 
[LPSC, 2012] to measure variations in pattern parameters (crest spacing, crest length and defect density, which is the number of defect pairs per total crest length) both within and between Titan's dune fields to describe pattern maturity and identify areas where changes in dune orientation are likely to occur (or may already be occurring). Measured defect densities are similar to those of Earth's largest linear dune fields, such as the Namib Sand Sea and the Simpson Desert. We use measured defect densities in the Werner and Kocurek model to estimate crestline reorientation rates. We find reorientation timescales varying from ten to a hundred thousand times the average migration timescale (time to migrate a bedform one meter, ~1 Titan year according to Tokano (Aeolian Research, 2010)). Well-organized patterns have the longest reorientation timescales (~10<sup>5</sup> migration timescales), while the topographically or spatially isolated patches of dunes show the shortest reorientation times (~10<sup>3</sup> migration timescales). In addition, comparisons between spacing and defect density reveal that the well-organized patterns plot along an expected trend with Earth and Mars' largest, well-organized fields. Patterns on Earth and Mars that have been degraded and broken by environmental change fall off this trend; the isolated dune patterns on Titan do so as well, suggesting that changing environmental conditions such as wind regime and/or sediment availability have influenced the dunes on Titan. Crestline orientations in these areas suggest star and crescentic (barchan) morphologies in addition to linear dunes. Our results suggest that Titan's dunes may react to gross bedform transport averaged over orbital timescales, relaxing the requirement that a single modern wind regime is necessary to produce the observed well-organized dune patterns.
We find signals of environmental change within the smallest patterns, suggesting that the dunes may have been recently reoriented or are reorienting to one component of a longer-timescale wind regime with a duty cycle that persists over many seasonal cycles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29782626','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29782626"><span>Weighted SGD for ℓp Regression with Randomized Preconditioning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W</p> <p>2016-01-01</p> <p>In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers, including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computation complexity.
Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in 𝒪(log n·nnz(A)+poly(d)/ε<sup>2</sup>) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints. In general this is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and the solution vector measured in prediction norm in 𝒪(log n·nnz(A)+poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n, and low dimension d satisfy d ≥ 1/ε and n ≥ d<sup>2</sup>/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems.
Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10⁻³, more quickly.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JHEP...10..083G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JHEP...10..083G"><span>Interpolation of hard and soft dilepton rates</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghisoiu, I.; Laine, M.</p> <p>2014-10-01</p> <p>Strict next-to-leading order (NLO) results for the dilepton production rate from a QCD plasma at temperatures above a few hundred MeV suffer from a breakdown of the loop expansion in the regime of soft invariant masses M² ≪ (πT)². In this regime an LPM resummation is needed for obtaining the correct leading-order result. We show how to construct an interpolation between the hard NLO and the leading-order LPM expression, which is theoretically consistent in both regimes and free from double counting. The final numerical results are presented in a tabulated form, suitable for insertion into hydrodynamical codes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19910011818','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19910011818"><span>Simulation of nap-of-the-Earth flight in helicopters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Condon, Gregory W.</p> <p>1991-01-01</p> <p>NASA-Ames along with the U.S. 
Army has conducted extensive simulation studies of rotorcraft in the nap-of-the-Earth (NOE) environment and has developed facility capabilities specifically designed for this flight regime. The experience gained to date in applying these facilities to the NOE flight regime is reported, along with the results of specific experimental studies conducted to understand the influence of both motion and visual scene on the fidelity of NOE simulation. Included are comparisons of results from concurrent piloted simulation and flight research studies. The results of a recent simulation experiment to study simulator sickness in this flight regime are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70041495','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70041495"><span>Changes in size and trends of North American sea duck populations associated with North Pacific oceanic regime shifts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Flint, Paul L.</p> <p>2013-01-01</p> <p>Broad-scale multi-species declines in populations of North American sea ducks for unknown reasons are cause for management concern. Oceanic regime shifts have been associated with rapid changes in ecosystem structure of the North Pacific and Bering Sea. However, relatively little is known about potential effects of these changes in oceanic conditions on marine bird populations at broad scales. I examined changes in North American breeding populations of sea ducks from 1957 to 2011 in relation to potential oceanic regime shifts in the North Pacific in 1977, 1989, and 1998. There was strong support for population-level effects of regime shifts in 1977 and 1989, but little support for an effect of the 1998 shift. 
The continental-level effects of these regime shifts differed across species groups and time. Based on patterns of sea duck population dynamics associated with regime shifts, it is unclear if the mechanism of change relates to survival or reproduction. Results of this analysis support the hypothesis that population size and trends of North American sea ducks are strongly influenced by oceanic conditions. The perceived population declines appear to have halted >20 years ago, and populations have been relatively stable or increasing since that time. Given these results, we should reasonably expect dramatic changes in sea duck population status and trends with future oceanic regime shifts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5393255','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5393255"><span>Evolving polycentric governance of the Great Barrier Reef</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Morrison, Tiffany H.</p> <p>2017-01-01</p> <p>A growing field of sustainability science examines how environments are transformed through polycentric governance. However, many studies are only snapshot analyses of the initial design or the emergent structure of polycentric regimes. There is less systematic analysis of the longitudinal robustness of polycentric regimes. The problem of robustness is approached by focusing not only on the structure of a regime but also on its context and effectiveness. These dimensions are examined through a longitudinal analysis of the Great Barrier Reef (GBR) governance regime, drawing on in-depth interviews and demographic, economic, and employment data, as well as organizational records and participant observation. 
Between 1975 and 2011, the GBR regime evolved into a robust polycentric structure as evident in an established set of multiactor, multilevel arrangements addressing marine, terrestrial, and global threats. However, from 2005 onward, multiscale drivers precipitated at least 10 types of regime change, ranging from contextual change that encouraged regime drift to deliberate changes that threatened regime conversion. More recently, regime realignment also has occurred in response to steering by international organizations and shocks such as the 2016 mass coral-bleaching event. The results show that structural density and stability in a governance regime can coexist with major changes in that regime’s context and effectiveness. Clear analysis of the vulnerability of polycentric governance to both diminishing effectiveness and the masking effects of increasing complexity provides sustainability science and governance actors with a stronger basis to understand and respond to regime change. PMID:28348238</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1935l0001V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1935l0001V"><span>Investigation of vertical transportation of Cs-137 in the different soil types and for the different raining regime</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Varinlioglu, Ahmet; Tugrul, A. Beril</p> <p>2018-02-01</p> <p>Cs-137 is an important fission product generated in nuclear reactors. It can therefore pose a risk to the environment under nuclear plant accident conditions. In this study, the vertical transport of Cs-137 in soil was investigated for three different soil types and three different raining regimes. The experiments were performed under lysimetric conditions in the laboratory. 
The results of the experiments show that, in every raining regime, the activities of the different soil types (sand, loam and clay) followed the same descending order. When the results are evaluated according to raining regime, the relative activity for every soil type consistently decreases from the higher to the lower raining conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21105980','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21105980"><span>Reserve design for uncertain responses of coral reefs to climate change.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mumby, Peter J; Elliott, Ian A; Eakin, C Mark; Skirving, William; Paris, Claire B; Edwards, Helen J; Enríquez, Susana; Iglesias-Prieto, Roberto; Cherubin, Laurent M; Stevens, Jamie R</p> <p>2011-02-01</p> <p>Rising sea temperatures cause mass coral bleaching and threaten reefs worldwide. We show how maps of variations in thermal stress can be used to help manage reefs for climate change. We map proxies of chronic and acute thermal stress and develop evidence-based hypotheses for the future response of corals to each stress regime. We then incorporate spatially realistic predictions of larval connectivity among reefs of the Bahamas and apply novel reserve design algorithms to create reserve networks for a changing climate. We show that scales of larval dispersal are large enough to connect reefs from desirable thermal stress regimes into a reserve network. Critically, we find that reserve designs differ according to the anticipated scope for phenotypic and genetic adaptation in corals, which remains uncertain. 
Attempts to provide a complete reserve design that hedged against different evolutionary outcomes achieved limited success, which emphasises the importance of considering the scope for adaptation explicitly. Nonetheless, 15% of reserve locations were selected under all evolutionary scenarios, making them a high priority for early designation. Our approach allows new insights into coral holobiont adaptation to be integrated directly into an adaptive approach to management. © 2010 Blackwell Publishing Ltd/CNRS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22622302-monte-carlo-method-simulation-coagulation-nucleation-based-weighted-particles-concepts-stochastic-resolution-merging','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22622302-monte-carlo-method-simulation-coagulation-nucleation-based-weighted-particles-concepts-stochastic-resolution-merging"><span>A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.</p> <p></p> <p>Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. 
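One ingredient of such constant-number weighted-particle schemes, merging two weighted particles into one while conserving total statistical weight and total mass, can be sketched as follows. The array layout, and the choice of merging the two lowest-weight particles to free a slot for a freshly nucleated one, are illustrative assumptions, not the paper's exact low-weight merging scheme.

```python
import numpy as np

def nucleate_with_merging(w, v, w_new, v_new):
    """Insert a freshly nucleated weighted particle into a full ensemble
    by first merging the two lowest-weight particles.

    w, v : arrays of statistical weights and particle volumes (fixed length N).
    The merge conserves total weight sum(w) and total mass sum(w * v)."""
    i, j = np.argsort(w)[:2]                 # two lowest-weight particles
    wm = w[i] + w[j]                         # merged weight
    vm = (w[i] * v[i] + w[j] * v[j]) / wm    # mass-weighted mean volume
    w[i], v[i] = wm, vm                      # merged particle fills slot i
    w[j], v[j] = w_new, v_new                # freed slot j takes the new particle
    return w, v
```

Keeping the particle count fixed this way avoids growing arrays during nucleation bursts while preserving the zeroth and first moments of the represented population.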
Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15142745','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15142745"><span>A multi-scaled approach for simulating chemical reaction systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Burrage, Kevin; Tian, Tianhai; Burrage, Pamela</p> <p>2004-01-01</p> <p>In this paper we give an overview of some very recent work, as well as presenting a new approach, on the stochastic simulation of multi-scaled systems involving chemical reactions. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix between small numbers of key regulatory proteins, and medium and large numbers of molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. 
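The "stochastic simulation algorithm" named above is Gillespie's direct method: draw an exponential waiting time from the total propensity, then pick which reaction fires in proportion to its propensity. A minimal sketch for a reversible isomerization A ⇌ B follows; the rate constants and species are illustrative, not taken from the paper.

```python
import numpy as np

def gillespie_ab(a0, b0, k1, k2, t_end, seed=0):
    """Gillespie direct method for the reversible isomerization A <-> B.
    Propensities: k1*A for A -> B, and k2*B for B -> A."""
    rng = np.random.default_rng(seed)
    t, a, b = 0.0, a0, b0
    while t < t_end:
        r1, r2 = k1 * a, k2 * b
        rtot = r1 + r2
        if rtot == 0.0:                      # no molecules left to react
            break
        t += rng.exponential(1.0 / rtot)     # waiting time to next event
        if rng.random() < r1 / rtot:         # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b
```

Every event changes one molecule at a time, which is exactly why the method becomes expensive in the "fast" regime and motivates the hybrid slow/medium/fast treatment the abstract describes.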
We then review some recent techniques on the treatment of coupled slow and fast reactions for stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We apply this approach to a biologically inspired problem involving the expression and activity of LacZ and LacY proteins in E. coli, and conclude with a discussion on the significance of this work. Copyright 2004 Elsevier Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyE...97..259K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyE...97..259K"><span>Twin lead ballistic conductor based on nanoribbon edge transport</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Konôpka, Martin; Dieška, Peter</p> <p>2018-03-01</p> <p>If a device like a graphene nanoribbon (GNR) has all four corners attached to electric current leads, the device becomes a quantum junction through which two electrical circuits can interact. We study such a system theoretically for stationary currents. The 4-point energy-dependent conductance matrix of the nanostructure and the classical resistors in the circuits are parameters of the model. The two bias voltages in the circuits are the control variables of the studied system while the electrochemical potentials at the device's terminals are non-trivially dependent on the voltages. For the special case of the linear-response regime analytical formulae for the operation of the coupled quantum-classical device are derived and applied. For higher bias voltages numerical solutions are obtained. The effects of non-equilibrium Fermi levels are captured using a recursive algorithm in which self-consistency between the electrochemical potentials and the currents is reached within a few iterations. 
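The self-consistency loop mentioned above (currents depend on the potentials, which in turn depend on the currents through the external resistors) can be illustrated with a deliberately simplified single-circuit model: a conductor of conductance G in series with an external resistor R under bias V. The linear conductance and the values used are assumptions for illustration, not the paper's GNR model.

```python
def self_consistent_current(V, G, R, tol=1e-12, max_iter=100):
    """Fixed-point iteration for a conductor (conductance G) in series
    with an external resistor R under bias V.  The voltage actually
    dropped across the conductor depends on the current, and vice versa,
    so iterate I <- G * (V - I*R) until the two are self-consistent.
    The iteration contracts when G*R < 1."""
    I = 0.0
    for _ in range(max_iter):
        I_new = G * (V - I * R)
        if abs(I_new - I) < tol:
            return I_new
        I = I_new
    return I
```

The closed-form answer here is I = G·V/(1 + G·R), so the sketch can be checked directly; in the full four-terminal problem no closed form exists away from linear response, which is why the recursion is used.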
The developed approach makes it possible to study scenarios ranging from independent circuits to strongly coupled ones. For the chosen model of the GNR with highly conductive zigzag edges we determine the regime in which the single device carries two almost independent currents.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007DPS....39.2414H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007DPS....39.2414H"><span>Quantifying The Effect Of Scattering Upon The Retrieved Dust Opacity In The Martian Atmosphere, As Deduced From Mro/mcs Measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data 
System (ADS)</a></p> <p>Howett, Carly; Irwin, P. G.; Teanby, N.; Calcutt, S. B.; Lolachi, R.; Bowles, N.; Schofield, J. T.; McCleese, D. J.</p> <p>2007-10-01</p> <p>Mars Climate Sounder data from September to November 2006 are analysed to determine the effect of scattering upon the retrieved dust opacity in the atmosphere of Mars. The inclusion of scattering in dust retrievals makes them significantly more computationally expensive. Thus, understanding the regimes in which scattering plays a less significant role could considerably decrease the computational time of analysing the extensive MCS dataset. Temperature profiles were initially retrieved using Nemesis, Oxford University's multivariate retrieval algorithm, at each location using MCS's A1, A2 and A3 channels (595 to 665 cm-1). Using these temperature profiles, and by assuming the characteristics of the dust particles to be comparable to those of Wolff and Clancy (2003), the dust opacity was retrieved using the B1 channel of MCS (290 to 340 cm-1) with and without scattering. The effect of scattering on the fit to the MCS data and on the derived vertical dust profile at various locations across the planet is presented. 
Particular emphasis is placed upon understanding the spatial and temporal variations of atmospheric regimes in which scattering plays a significant role.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25480044','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25480044"><span>A method to determine the acoustic reflection and absorption coefficients of porous media by using modal dispersion in a waveguide.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Prisutova, Jevgenija; Horoshenkov, Kirill; Groby, Jean-Philippe; Brouard, Bruno</p> <p>2014-12-01</p> <p>The measurement of acoustic material characteristics using a standard impedance tube method is generally limited to the plane wave regime below the tube cut-on frequency. This implies that the size of the tube and, consequently, the size of the material specimen must remain smaller than a half of the wavelength. This paper presents a method that enables the extension of the frequency range beyond the plane wave regime by at least a factor of 3, so that the size of the material specimen can be much larger than the wavelength. The proposed method is based on measuring the sound pressure at different axial locations and applying the spatial Fourier transform. A normal mode decomposition approach is used together with an optimization algorithm to minimize the discrepancy between the measured and predicted sound pressure spectra. This allows the frequency and angle dependent reflection and absorption coefficients of the material specimen to be calculated in an extended frequency range. The method has been tested successfully on samples of melamine foam and wood fiber. 
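Below the first cut-on frequency the duct field reduces to two plane waves, p(x) = A·exp(-ikx) + B·exp(+ikx), and the multi-position measurement idea above can be sketched as a least-squares fit for (A, B), with reflection coefficient R = B/A. This is a plane-wave-only simplification of the modal decomposition the paper describes (the full method also resolves higher-order modes); the microphone positions and wavenumber below are assumed values.

```python
import numpy as np

def reflection_from_pressures(x, p, k):
    """Least-squares fit of incident (A) and reflected (B) plane-wave
    amplitudes from complex pressures p measured at axial positions x.
    Model: p(x) = A*exp(-1j*k*x) + B*exp(+1j*k*x).  Returns R = B/A."""
    M = np.column_stack([np.exp(-1j * k * x), np.exp(1j * k * x)])
    (A, B), *_ = np.linalg.lstsq(M, p, rcond=None)
    return B / A
```

With more than two measurement positions the fit is overdetermined, which averages out measurement noise; the fit degenerates only when the spacing makes the two columns collinear (k·Δx a multiple of π).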
The measured data are in close agreement with the predictions by the equivalent fluid model for the acoustical properties of porous media.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22622227-comparisons-time-explicit-hybrid-kinetic-fluid-code-architect-plasma-wakefield-acceleration-full-pic-code','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22622227-comparisons-time-explicit-hybrid-kinetic-fluid-code-architect-plasma-wakefield-acceleration-full-pic-code"><span>Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Massimo, F., E-mail: francesco.massimo@ensta-paristech.fr; Dipartimento SBAI, Università di Roma “La Sapienza“, Via A. Scarpa 14, 00161 Roma; Atzeni, S.</p> <p></p> <p>Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however PIC codes demand heavy computational resources. Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three dimensional particle in cell code are reported. 
The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1243227-characteristics-aerosol-indirect-effect-based-dynamic-regimes-global-climate-models','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1243227-characteristics-aerosol-indirect-effect-based-dynamic-regimes-global-climate-models"><span>Characteristics of aerosol indirect effect based on dynamic regimes in global climate models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zhang, S.; Wang, Minghuai; Ghan, Steven J.</p> <p></p> <p>Aerosol-cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < -25 hPa/d) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. 
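The regime decomposition described above can be sketched as simple compositing: classify each monthly sample by ω500 (ascending when ω500 < -25 hPa/d, the threshold quoted above) and by LTS, then average the quantity of interest within each regime. The LTS split at 16 K and the regime names below are illustrative assumptions, not values from the study.

```python
import numpy as np

def regime_means(omega500, lts, field, lts_split=16.0):
    """Composite `field` into dynamical regimes:
    ascending when omega500 < -25 hPa/d, otherwise split by LTS (K)."""
    regimes = {
        "ascending":              omega500 < -25.0,
        "non_ascending_stable":   (omega500 >= -25.0) & (lts >= lts_split),
        "non_ascending_unstable": (omega500 >= -25.0) & (lts < lts_split),
    }
    # Mean of the field within each regime; NaN if a regime is empty.
    return {name: float(field[mask].mean()) if mask.any() else float("nan")
            for name, mask in regimes.items()}
```

The three masks are mutually exclusive and exhaustive, so every sample contributes to exactly one regime mean.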
Shortwave aerosol indirect forcing in ascending regimes is as large as that in stratocumulus regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (> 0.1 mm/d) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). Results show that the uncertainty in AIE is even larger within specific dynamical regimes than that globally, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22140273-introducing-cafein-new-computational-tool-stellar-pulsations-dynamic-tides','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22140273-introducing-cafein-new-computational-tool-stellar-pulsations-dynamic-tides"><span>INTRODUCING CAFein, A NEW COMPUTATIONAL TOOL FOR STELLAR PULSATIONS AND DYNAMIC TIDES</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Valsecchi, F.; Farr, W. M.; Willems, B.</p> <p>2013-08-10</p> <p>Here we present CAFein, a new computational tool for investigating radiative dissipation of dynamic tides in close binaries and of non-adiabatic, non-radial stellar oscillations in isolated stars in the linear regime. For the latter, CAFein computes the non-adiabatic eigenfrequencies and eigenfunctions of detailed stellar models. The code is based on the so-called Riccati method, a numerical algorithm that has been successfully applied to a variety of stellar pulsators, and which does not suffer from the major drawbacks of commonly used shooting and relaxation schemes. 
Here we present an extension of the Riccati method to investigate dynamic tides in close binaries. We demonstrate CAFein's capabilities as a stellar pulsation code in both the adiabatic and non-adiabatic regimes, by reproducing previously published eigenfrequencies of a polytrope, and by successfully identifying the unstable modes of a stellar model in the β Cephei/SPB region of the Hertzsprung-Russell diagram. Finally, we verify CAFein's behavior in the dynamic tides regime by investigating the effects of dynamic tides on the eigenfunctions and orbital and spin evolution of massive main sequence stars in eccentric binaries, and of hot Jupiter host stars. The plethora of asteroseismic data provided by NASA's Kepler satellite, some of which include the direct detection of tidally excited stellar oscillations, makes CAFein quite timely. Furthermore, the increasing number of observed short-period detached double white dwarfs (WDs) and the observed orbital decay in the tightest of such binaries open up a new possibility of investigating WD interiors through the effects of tides on their orbital evolution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21356174','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21356174"><span>Effects of flow regime and pesticides on periphytic communities: evolution and role of biodiversity.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Villeneuve, Aurélie; Montuelle, Bernard; Bouchez, Agnès</p> <p>2011-04-01</p> <p>The effects of chemical and physical factors on periphyton structure, diversity and functioning were investigated in an outdoor mesocosm experiment. Stream biofilms were subjected to a pesticide mix (diuron and azoxystrobin) under two different hydraulic regimes. 
The hydraulic regimes differed by spatial variations of flow conditions (turbulent with high variations vs. laminar with low variations). The effects of the hydraulic regime and pesticides were assessed at the level of the periphytic communities. We focused on the change in the biodiversity of these communities under the two hydraulic regimes, and on the role of these biodiversity changes in case of pesticide contamination. Changes in structural (biomass, cell density), diversity (community composition assessed by PCR-DGGE and microscopic analysis) and functional (bacterial and algal production, sensitivity to the herbicide) parameters were monitored throughout a 2-month experiment. The results showed that exposure to pesticides affected the phytobenthic community targeted by the herbicide, impacting on both its growth dynamics and its primary production. Conversely, the impact of the flow regime was greater than that of pesticides on the non-target bacterial community with higher bacterial density and production in laminar mesocosms (uniform regime). An interaction between flow and pollution effects was also observed. Communities that developed in turbulent mesocosms (heterogeneous regime) were more diversified, as a result of increased microhabitat heterogeneity due to high spatial variations. However, this higher biodiversity did not increase the ability of these biofilms to tolerate pesticides, as expected. On the contrary, the sensitivity of these communities to pesticide contamination was, in fact, increased. Copyright © 2011 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JHyd..409..328C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JHyd..409..328C"><span>Identifying natural flow regimes using fish communities</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chang, Fi-John; Tsai, Wen-Ping; Wu, Tzu-Ching; Chen, Hung-kwai; Herricks, Edwin E.</p> <p>2011-10-01</p> <p>Modern water resources management has adopted natural flow regimes as reasonable targets for river restoration and conservation. The characterization of a natural flow regime begins with the development of hydrologic statistics from flow records. However, little guidance exists for defining the period of record needed for regime determination. In Taiwan, the Taiwan Eco-hydrological Indicator System (TEIS), a group of hydrologic statistics selected for fisheries relevance, is being used to evaluate ecological flows. The TEIS statistics were chosen to characterize the relationships between flow and the life history of indigenous species. Using the TEIS and biosurvey data for Taiwan, this paper identifies the length of hydrologic record sufficient for natural flow regime characterization. To define the ecological hydrology of fish communities, this study connected hydrologic statistics to fish communities by using methods to define antecedent conditions that influence existing community composition. A moving average method was applied to TEIS statistics to reflect the effects of antecedent flow conditions, and a point-biserial correlation method was used to relate fisheries collections with TEIS statistics. The resulting fish species-TEIS (FISH-TEIS) hydrologic statistics matrix takes full advantage of historical flows and fisheries data.
The analysis indicates that, in the watersheds analyzed, averaging TEIS statistics for the present year and 3 years prior to the sampling date, termed MA(4), is sufficient to develop a natural flow regime. This result suggests that flow regimes based on hydrologic statistics for the period of record can be replaced by regimes developed for sampled fish communities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010PhyD..239.1798K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010PhyD..239.1798K"><span>Physical understanding of complex multiscale biochemical models via algorithmic simplification: Glycolysis in Saccharomyces cerevisiae</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kourdis, Panayotis D.; Steuer, Ralf; Goussis, Dimitris A.</p> <p>2010-09-01</p> <p>Large-scale models of cellular reaction networks are usually highly complex and characterized by a wide spectrum of time scales, making a direct interpretation and understanding of the relevant mechanisms almost impossible. We address this issue by demonstrating the benefits provided by model reduction techniques. We employ the Computational Singular Perturbation (CSP) algorithm to analyze the glycolytic pathway of intact yeast cells in the oscillatory regime. As a primary object of research for many decades, glycolytic oscillations represent a paradigmatic candidate for studying biochemical function and mechanisms. Using a previously published full-scale model of glycolysis, we show that, due to fast dissipative time scales, the solution is asymptotically attracted on a low dimensional manifold. 
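The antecedent-flow averaging and point-biserial screening described in the Chang et al. record can be sketched in a few lines. This is a minimal illustration: the data, window alignment, and variable names below are made up for the example, not taken from the paper.

```python
from math import sqrt

def moving_average(values, window=4):
    """MA(4): average each year's statistic with the 3 years preceding it."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

def point_biserial(presence, statistic):
    """Correlate a binary variable (species present/absent) with a
    continuous hydrologic statistic; presence[i] is 0 or 1."""
    n = len(presence)
    ones = [s for p, s in zip(presence, statistic) if p == 1]
    zeros = [s for p, s in zip(presence, statistic) if p == 0]
    mean = sum(statistic) / n
    sd = sqrt(sum((s - mean) ** 2 for s in statistic) / n)
    p, q = len(ones) / n, len(zeros) / n
    m1, m0 = sum(ones) / len(ones), sum(zeros) / len(zeros)
    return (m1 - m0) / sd * sqrt(p * q)

# Toy example: a low-flow statistic over 8 years, smoothed with MA(4),
# then correlated with presence/absence of a hypothetical species.
flows = [10.0, 12.0, 9.0, 11.0, 30.0, 28.0, 31.0, 29.0]
ma4 = moving_average(flows)       # 5 smoothed values
present = [0, 0, 1, 1, 1]         # surveys aligned with ma4
r = point_biserial(present, ma4)
```

A strongly positive (or negative) `r` for a species-statistic pair is what flags that TEIS statistic as ecologically relevant in this kind of screening.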
Without any further input from the investigator, CSP clarifies several long-standing questions in the analysis of glycolytic oscillations, such as the origin of the oscillations in the upper part of glycolysis, the importance of energy and redox status, as well as the fact that neither the oscillations nor cell-cell synchronization can be understood in terms of glycolysis as a simple linear chain of sequentially coupled reactions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016OptEn..55g6107Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016OptEn..55g6107Z"><span>Reconstruction of combustion temperature and gas concentration distributions using line-of-sight tunable diode laser absorption spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Zhirong; Sun, Pengshuai; Pang, Tao; Xia, Hua; Cui, Xiaojuan; Li, Zhe; Han, Luo; Wu, Bian; Wang, Yu; Sigrist, Markus W.; Dong, Fengzhong</p> <p>2016-07-01</p> <p>Spatial temperature and gas concentration distributions are crucial for combustion studies to characterize the combustion position and to evaluate the combustion regime and the released heat quantity. Optical computed tomography (CT) enables the reconstruction of temperature and gas concentration fields in a flame on the basis of line-of-sight tunable diode laser absorption spectroscopy (LOS-TDLAS). A pair of H2O absorption lines at wavelengths 1395.51 and 1395.69 nm is selected. Temperature and H2O concentration distributions for a flat flame furnace are calculated by superimposing two absorption peaks with a discrete algebraic iterative algorithm and a mathematical fitting algorithm. By comparison, direct absorption spectroscopy measurements agree well with the thermocouple measurements and yield a good correlation.
The CT reconstruction data of different air-to-fuel ratio combustion conditions (incomplete combustion and full combustion) and three different types of burners (one, two, and three flat flame furnaces) demonstrate that TDLAS offers short response times and enables real-time temperature and gas concentration distribution measurements for combustion diagnosis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EJASP2017...65C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EJASP2017...65C"><span>Sequential estimation of intrinsic activity and synaptic input in single neurons by particle filtering with optimal importance density</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Closas, Pau; Guillamon, Antoni</p> <p>2017-12-01</p> <p>This paper deals with the problem of inferring the signals and parameters that cause neural activity to occur. While the ultimate challenge is to unveil the brain's connectivity, here we focus on a microscopic vision of the problem, where single neurons (potentially connected to a network of peers) are at the core of our study. The sole observations available are noisy, sampled voltage traces obtained from intracellular recordings. We design algorithms and inference methods using the tools provided by stochastic filtering that allow a probabilistic interpretation and treatment of the problem. Using particle filtering, we are able to reconstruct traces of voltages and estimate the time course of auxiliary variables. By extending the algorithm through PMCMC methodology, we are able to estimate hidden physiological parameters as well, such as intrinsic conductances or reversal potentials.
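The "discrete algebraic iterative algorithm" family used in reconstructions like the Zhang et al. record above can be illustrated with a minimal Kaczmarz/ART sweep. The tiny system below is made up for the example; a real reconstruction would build each row of A from the path lengths of one laser beam through the grid cells.

```python
def art_solve(A, b, sweeps=200, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz): project the current
    estimate onto the hyperplane of each measurement row in turn."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(a * xi for a, xi in zip(row, x))
            norm2 = sum(a * a for a in row)
            if norm2 == 0.0:
                continue
            c = relax * (bi - dot) / norm2
            x = [xi + c * a for a, xi in zip(row, x)]
    return x

# Two "beams" crossing two cells: the absorbances b constrain the cell
# values x; the exact solution here is x = [2.0, 1.0].
A = [[1.0, 1.0],
     [1.0, -1.0]]
b = [3.0, 1.0]
x = art_solve(A, b)
```

With orthogonal rows, as here, the sweep converges immediately; in practice many beams and a relaxation factor below 1 are used to tame noisy, inconsistent data.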
Last, but not least, the method is applied to estimate synaptic conductances arriving at a target cell, thus reconstructing the synaptic excitatory/inhibitory input traces. Notably, the performance of these estimations achieves the theoretical lower bounds even in spiking regimes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016APS..MARK36005W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016APS..MARK36005W"><span>Homogenous Nucleation and Crystal Growth in a Model Liquid from Direct Energy Landscape Sampling Simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Walter, Nathan; Zhang, Yang</p> <p></p> <p>Nucleation and crystal growth are understood to be activated processes involving the crossing of free-energy barriers. Attempts to capture the entire crystallization process over long timescales with molecular dynamics simulations have met major obstacles because of molecular dynamics' temporal constraints. Herein, we circumvent this temporal limitation by using a brute-force, metadynamics-like, adaptive basin-climbing algorithm and directly sample the free-energy landscape of a model liquid, argon. The algorithm biases the system to evolve from an amorphous, liquid-like structure towards an FCC crystal through inherent structures, and then traces back the energy barriers. Consequently, the sampled timescale is macroscopically long. We observe that the formation of a crystal involves two processes, each with a unique temperature-dependent energy barrier. One barrier corresponds to the crystal nucleus formation; the other barrier corresponds to the crystal growth. We find the two processes dominate in different temperature regimes.
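The particle-filtering machinery in the Closas & Guillamon record can be sketched as a bootstrap filter on a toy scalar state-space model. Everything below (the AR(1) dynamics, noise levels, and names) is an illustrative stand-in for the voltage dynamics, not the authors' optimal-importance-density design.

```python
import math
import random

def bootstrap_filter(observations, n_particles=500, a=0.9,
                     proc_sd=0.5, obs_sd=0.5, seed=0):
    """Track the hidden state of x_t = a*x_{t-1} + process noise,
    y_t = x_t + observation noise, by propagate / weight / resample."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate each particle through the state dynamics
        particles = [a * p + rng.gauss(0.0, proc_sd) for p in particles]
        # weight by the Gaussian observation likelihood (unnormalized)
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2)
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # posterior-mean estimate, then multinomial resampling
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

y = [1.0, 1.2, 0.8, 1.1, 0.9]   # synthetic "recordings"
est = bootstrap_filter(y)
```

Replacing this bootstrap proposal with an optimal importance density, as in the paper, reduces the number of particles needed for a given accuracy.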
Compared to other computational techniques, our method requires no assumptions about the shape or chemical potential of the critical crystal nucleus. The success of this method is encouraging for studying the crystallization of more complex systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920010138','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920010138"><span>A computer code for multiphase all-speed transient flows in complex geometries. MAST version 1.0</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chen, C. P.; Jiang, Y.; Kim, Y. M.; Shang, H. M.</p> <p>1991-01-01</p> <p>The operation of the MAST code, which computes transient solutions to the multiphase flow equations applicable to all-speed flows, is described. Two-phase flows are formulated based on the Eulerian-Lagrange scheme in which the continuous phase is described by the Navier-Stokes equation (or Reynolds equations for turbulent flows). The dispersed phase is formulated by a Lagrangian tracking scheme. The numerical solution algorithm utilized for fluid flows is a newly developed pressure-implicit algorithm based on the operator-splitting technique in generalized nonorthogonal coordinates. This operator split allows separate operation on each of the variable fields to handle pressure-velocity coupling. The resulting pressure-correction equation is hyperbolic in nature and is effective for Mach numbers ranging from the incompressible limit to supersonic flow regimes. The present code adopts a nonstaggered grid arrangement; thus, the velocity components and other dependent variables are collocated at the same grid points.
A sequence of benchmark-quality problems, including incompressible, subsonic, transonic, and supersonic flows, gas-droplet two-phase flows, and spray-combustion problems, was computed to demonstrate the robustness and accuracy of the present code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20180001221','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20180001221"><span>Formulation and Implementation of Inflow/Outflow Boundary Conditions to Simulate Propulsive Effects</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian</p> <p>2018-01-01</p> <p>Boundary conditions appropriate for simulating flow entering or exiting the computational domain to mimic propulsion effects have been implemented in an adaptive Cartesian simulation package. A robust iterative algorithm to control mass flow rate through an outflow boundary surface is presented, along with a formulation to explicitly specify mass flow rate through an inflow boundary surface. The boundary conditions have been applied within a mesh adaptation framework based on the method of adjoint-weighted residuals. This allows for proper adaptive mesh refinement when modeling propulsion systems. The new boundary conditions are demonstrated on several notional propulsion systems operating in flow regimes ranging from low subsonic to hypersonic. The examples show that the prescribed boundary state is more properly imposed as the mesh is refined. The mass-flow-rate steering algorithm is shown to be an efficient approach in each example.
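The mass-flow-rate steering idea described above — iteratively adjusting an outflow boundary state until the computed mass flow hits a target — can be sketched with a secant update. The toy `flow_model` below is a purely illustrative stand-in for a CFD evaluation of mass flow versus back-pressure; it is not the paper's algorithm.

```python
def steer_mass_flow(flow_model, target, p0, p1, tol=1e-8, max_iter=50):
    """Secant iteration on back-pressure p until flow_model(p) ~= target."""
    f0, f1 = flow_model(p0) - target, flow_model(p1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            return p1
        p0, p1 = p1, p1 - f1 * (p1 - p0) / (f1 - f0)
        f0, f1 = f1, flow_model(p1) - target
    return p1

# Hypothetical monotone response: mass flow drops as back-pressure rises.
def flow_model(p_back):
    return 10.0 - 2.0 * p_back   # kg/s, illustrative only

p_star = steer_mass_flow(flow_model, target=4.0, p0=0.0, p1=1.0)
```

Because each "function evaluation" is a (partially converged) flow solution in practice, a steering scheme that converges in a handful of iterations is what makes this approach efficient.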
To demonstrate the boundary conditions on a realistic complex aircraft geometry, two of the new boundary conditions are also applied to a modern low-boom supersonic demonstrator design with multiple flow inlets and outlets.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26042625','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26042625"><span>An efficient quasi-3D particle tracking-based approach for transport through fractures with application to dynamic dispersion calculation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Lichun; Cardenas, M Bayani</p> <p>2015-08-01</p> <p>The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT. Copyright © 2015 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/10140495','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/10140495"><span>Multiparticle imaging technique for two-phase fluid flows using pulsed laser speckle velocimetry. Final report, September 1988--November 1992</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hassan, T.A.</p> <p>1992-12-01</p> <p>The practical use of Pulsed Laser Velocimetry (PLV) requires the use of fast, reliable computer-based methods for tracking numerous particles suspended in a fluid flow. Two methods for performing tracking are presented. One method tracks a particle through multiple sequential images (minimum of four required) by prediction and verification of particle displacement and direction. The other method, requiring only two sequential images, uses a dynamic, binary, spatial cross-correlation technique. The algorithms are tested on computer-generated synthetic data and experimental data obtained with traditional PLV methods. This allowed error analysis and testing of the algorithms on real engineering flows. A novel method is proposed which eliminates tedious, undesirable manual operator assistance in removing erroneous vectors. This method uses an iterative process involving an interpolated field produced from the most reliable vectors. Methods are developed to allow fast analysis and presentation of sets of PLV image data. Experimental investigation of a two-phase, horizontal, stratified flow regime was performed to determine the interface drag force, and correspondingly, the drag coefficient. A horizontal, stratified flow test facility using water and air was constructed to allow interface shear measurements with PLV techniques.
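The prediction-and-verification tracking step described above can be illustrated with a toy two-frame matcher: predict where each particle should land, then verify by accepting only a partner within a distance tolerance. The coordinates and tolerance are synthetic; real PLV tracking uses four or more frames plus direction checks.

```python
import math

def match_particles(frame_a, frame_b, predicted, tol):
    """For each particle in frame_a, look for a partner in frame_b near
    the predicted displacement; verify by a distance tolerance."""
    pairs = []
    for (xa, ya) in frame_a:
        tx, ty = xa + predicted[0], ya + predicted[1]  # predicted position
        best, best_d = None, tol
        for (xb, yb) in frame_b:
            d = math.hypot(xb - tx, yb - ty)
            if d < best_d:
                best, best_d = (xb, yb), d
        if best is not None:
            pairs.append(((xa, ya), best))
    return pairs

# Two synthetic frames: all particles shifted by (1.0, 0.5) plus jitter.
frame1 = [(0.0, 0.0), (2.0, 1.0), (4.0, 3.0)]
frame2 = [(1.02, 0.51), (2.98, 1.49), (5.01, 3.52)]
pairs = match_particles(frame1, frame2, predicted=(1.0, 0.5), tol=0.2)
```

Each accepted pair yields one velocity vector (displacement over inter-frame time); the iterative interpolated-field idea in the abstract then flags vectors that disagree with their reliable neighbours.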
The experimentally obtained local drag measurements were compared with theoretical results given by conventional interfacial drag theory. Close agreement was shown when local conditions near the interface were similar to space-averaged conditions. However, theory based on macroscopic, space-averaged flow behavior was shown to give incorrect results if the local gas velocity near the interface was unstable, transient, and dissimilar from the average gas velocity through the test facility.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JPRS...66..287M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JPRS...66..287M"><span>Imaging spectroscopy in soil-water based site suitability assessment for artificial regeneration to Scots pine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Middleton, Maarit; Närhi, Paavo; Sutinen, Raimo</p> <p></p> <p>In a humid northern boreal climate, the success rate of artificial regeneration to Scots pine (Pinus sylvestris L.) can be improved by including a soil water content (SWC) based assessment of site suitability in the reforestation planning process. This paper introduces an application of airborne visible-near-infrared imaging spectroscopic data to identify suitable subregions of forest compartments for the low-SWC-tolerant Scots pine.
The spatial patterns of understorey plant species communities, recorded by the AISA (Airborne Imaging Spectrometer for Applications) sensor, were demonstrated to be dependent on the underlying SWC. According to the nonmetric multidimensional scaling and correlation results, twelve understorey species were found to be most abundant on sites with high soil SWCs. The abundance of bare soil and rocks and the abundance of more than ten species indicated low soil SWCs. The spatial patterns of understorey are attributed to the time-stability of the underlying SWC patterns. A supervised artificial neural network (radial basis functional link network, probabilistic neural network) approach was taken to classify AISA imaging spectrometer data with dielectric (as a measure of volumetric SWC) ground referencing into regimes suitable and unsuitable for Scots pine. The accuracy assessment with receiver operating characteristics curves demonstrated a maximum area-under-the-curve value of 74.1%, which indicated moderate success of the NN modelling. The results signified the importance of the training set's quality and adequate quantity (>2.43 points/ha), and of NN algorithm selection, over fine-tuning of the NN training parameters. This methodology for the analysis of site suitability of Scots pine can be recommended, especially when artificial regeneration of former mixed wood Norway spruce (Picea abies L. Karst.) - downy birch (Betula pubescens Ehrh.)
stands is being considered, so that areas artificially regenerated to Scots pine can be optimized for forestry purposes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27627408','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27627408"><span>Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bergeron, Dominic; Tremblay, A-M S</p> <p>2016-08-01</p> <p>Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy.
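As a cartoon of the objective that maximum-entropy analytic continuation optimizes — a χ² data misfit balanced against an entropy term weighted by α — one can run plain gradient descent on a tiny discretized spectrum. The kernel, grid, default model, and α below are illustrative only and bear no relation to the Bergeron & Tremblay implementation.

```python
import math

def maxent_fit(K, G, sigma, model, alpha, dw=1.0, lr=0.05, iters=2000):
    """Minimize chi^2/2 - alpha*S for a positive spectrum A on a grid,
    with entropy S = sum(A - m - A*ln(A/m))*dw, by descent in u = ln(A/m)
    (the log parametrization keeps A positive automatically)."""
    n = len(model)
    u = [0.0] * n                       # start at the default model
    for _ in range(iters):
        A = [m * math.exp(ui) for m, ui in zip(model, u)]
        fit = [sum(Kj[i] * A[i] * dw for i in range(n)) for Kj in K]
        resid = [(g - f) / sigma ** 2 for g, f in zip(G, fit)]
        for i in range(n):
            # dQ/dA_i = -sum_j K_ji*dw*resid_j + alpha*dw*ln(A_i/m_i)
            grad = (-sum(K[j][i] * dw * resid[j] for j in range(len(G)))
                    + alpha * dw * u[i])
            u[i] -= lr * grad * A[i]    # chain rule: dA/du = A
    return [m * math.exp(ui) for m, ui in zip(model, u)]

# Toy problem: recover a 4-point spectrum from two moment "measurements".
K = [[1.0, 1.0, 1.0, 1.0],
     [0.0, 1.0, 2.0, 3.0]]
G = [1.0, 2.0]
model = [0.25, 0.25, 0.25, 0.25]
A = maxent_fit(K, G, sigma=1.0, model=model, alpha=0.1)
```

Sweeping α and watching where χ²(α) crosses from noise-fitting to information-fitting is the consistency condition the paper exploits; this sketch just fixes one α.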
Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in a broad background, for example) while ensuring quantitative accuracy of the result whenever the precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as open-source, user-friendly software.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013MsT..........1N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013MsT..........1N"><span>Seismic attributes and advanced computer algorithm to predict formation pore pressure: Qalibah formation of Northwest Saudi Arabia</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nour, Abdoulshakour M.</p> <p></p> <p>Oil and gas exploration professionals have long recognized the importance of predicting pore pressure before drilling wells. Pre-drill pore pressure estimation not only helps with drilling wells safely but also aids in the determination of formation fluids migration and seal integrity. With respect to the hydrocarbon reservoirs, the appropriate drilling mud weight is directly related to the estimated pore pressure in the formation. If the mud weight is lower than the formation pressure, a blowout may occur, and conversely, if it is higher than the formation pressure, the formation may suffer irreparable damage due to the invasion of drilling fluids into the formation. A simple definition of pore pressure is the pressure of the pore fluids in excess of the hydrostatic pressure. In this thesis, I investigated the utility of an advanced computer algorithm called the Support Vector Machine (SVM) to learn the pattern of high pore pressure regimes, using seismic attributes such as instantaneous phase, t*attenuation, cosine of phase, Vp/Vs ratio, P-impedance, reflection acoustic impedance, and dominant frequency, plus one well attribute (mud weight), as the learning dataset. I applied this technique to the overpressured Qalibah formation of Northwest Saudi Arabia.
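The SVM idea in the Nour thesis — learning a decision boundary between normal- and high-pressure examples from attribute vectors — can be illustrated with a plain subgradient-descent linear SVM on made-up attribute data. The attribute values, labels, and hyperparameters below are hypothetical; real work would use a kernel SVM library and actual seismic attributes.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=500, lr=0.1, seed=0):
    """Hinge-loss subgradient descent: minimize
    lam/2*||w||^2 + mean(max(0, 1 - y*(w.x + b))), with y in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(X[0])
    w, b = [0.0] * dim, 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1.0:            # point violates the margin
                w = [wj - lr * (lam * wj - y[i] * xj)
                     for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:                       # only the regularizer acts
                w = [wj * (1.0 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical 2-attribute examples: [Vp/Vs ratio, normalized phase]
X = [[1.6, 0.2], [1.7, 0.3], [1.9, 0.8], [2.0, 0.9]]
y = [-1, -1, 1, 1]   # -1 = normal pressure, +1 = overpressured
w, b = train_linear_svm(X, y)
```

The learned (w, b) then scores any attribute vector inside the seismic volume, which is how a trend can be mapped at points away from the wells.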
The results of my research revealed that in the Qalibah formation of Northwest Saudi Arabia, the pore pressure trend can be predicted using SVM with seismic and well attributes as the learning dataset. I was able to show the pore pressure trend at any given point within the geographical extent of the 3D seismic data from which the seismic attributes were derived. In addition, my results surprisingly showed the subtle variation of pressure within the thick succession of shale units of the Qalibah formation.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>